00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1826 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3087 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.113 using credential 00000000-0000-0000-0000-000000000002 00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.139 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.162 Using shallow fetch with depth 1 00:00:00.162 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.162 > git --version # timeout=10 00:00:00.185 > git --version # 'git version 2.39.2' 00:00:00.185 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.185 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.186 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.652 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.664 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.676 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:04.676 > git config core.sparsecheckout # timeout=10 00:00:04.688 > git read-tree -mu HEAD # timeout=10 00:00:04.702 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:04.718 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:04.718 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:04.795 [Pipeline] Start of Pipeline 00:00:04.809 [Pipeline] library 00:00:04.810 Loading library shm_lib@master 00:00:04.811 Library shm_lib@master is cached. Copying from home. 00:00:04.827 [Pipeline] node 00:00:04.836 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.837 [Pipeline] { 00:00:04.847 [Pipeline] catchError 00:00:04.849 [Pipeline] { 00:00:04.862 [Pipeline] wrap 00:00:04.870 [Pipeline] { 00:00:04.876 [Pipeline] stage 00:00:04.877 [Pipeline] { (Prologue) 00:00:05.036 [Pipeline] sh 00:00:05.320 + logger -p user.info -t JENKINS-CI 00:00:05.334 [Pipeline] echo 00:00:05.335 Node: GP11 00:00:05.341 [Pipeline] sh 00:00:05.638 [Pipeline] setCustomBuildProperty 00:00:05.651 [Pipeline] echo 00:00:05.653 Cleanup processes 00:00:05.658 [Pipeline] sh 00:00:05.942 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.942 295863 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.952 [Pipeline] sh 00:00:06.229 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.229 ++ grep -v 'sudo pgrep' 00:00:06.229 ++ awk '{print $1}' 00:00:06.229 + sudo kill -9 00:00:06.229 + true 00:00:06.241 [Pipeline] cleanWs 00:00:06.249 [WS-CLEANUP] Deleting project workspace... 00:00:06.249 [WS-CLEANUP] Deferred wipeout is used... 
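The cleanup step traced above kills any SPDK processes left over from a previous run before the workspace wipe. A minimal sketch of that idiom follows; kill_stale_spdk is a hypothetical helper name, while the pipeline itself inlines the same pgrep / grep -v / awk chain and tolerates the no-match case with "|| true" (visible above as the bare "+ sudo kill -9" followed by "+ true"):

    #!/usr/bin/env bash
    # Kill any processes still running out of the old workspace.
    kill_stale_spdk() {
        local ws=$1
        local pids
        # pgrep -a prints "pid cmdline"; -f matches the full command line.
        # The sudo pgrep invocation matches its own pattern, so filter it out.
        pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
        # No leftovers is the common case; never fail the build on it.
        [ -n "$pids" ] && sudo kill -9 $pids || true
    }
    kill_stale_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest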
00:00:06.254 [WS-CLEANUP] done 00:00:06.258 [Pipeline] setCustomBuildProperty 00:00:06.270 [Pipeline] sh 00:00:06.573 + sudo git config --global --replace-all safe.directory '*' 00:00:06.642 [Pipeline] nodesByLabel 00:00:06.643 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.652 [Pipeline] httpRequest 00:00:06.656 HttpMethod: GET 00:00:06.657 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:06.661 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:06.672 Response Code: HTTP/1.1 200 OK 00:00:06.673 Success: Status code 200 is in the accepted range: 200,404 00:00:06.673 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:12.350 [Pipeline] sh 00:00:12.634 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:12.652 [Pipeline] httpRequest 00:00:12.657 HttpMethod: GET 00:00:12.657 URL: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:12.658 Sending request to url: http://10.211.164.101/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:12.664 Response Code: HTTP/1.1 200 OK 00:00:12.665 Success: Status code 200 is in the accepted range: 200,404 00:00:12.665 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:22.017 [Pipeline] sh 00:01:22.301 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:01:25.599 [Pipeline] sh 00:01:25.880 + git -C spdk log --oneline -n5 00:01:25.880 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:01:25.880 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:01:25.880 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:01:25.880 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:01:25.880 3b33f4333 test/nvme/cuse: Fix typo 00:01:25.892 [Pipeline] } 00:01:25.908 [Pipeline] // stage 00:01:25.917 [Pipeline] stage 00:01:25.919 [Pipeline] { (Prepare) 00:01:25.941 [Pipeline] writeFile 00:01:25.958 [Pipeline] sh 00:01:26.260 + logger -p user.info -t JENKINS-CI 00:01:26.274 [Pipeline] sh 00:01:26.554 + logger -p user.info -t JENKINS-CI 00:01:26.567 [Pipeline] sh 00:01:26.849 + cat autorun-spdk.conf 00:01:26.849 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.850 SPDK_TEST_NVMF=1 00:01:26.850 SPDK_TEST_NVME_CLI=1 00:01:26.850 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.850 SPDK_TEST_NVMF_NICS=e810 00:01:26.850 SPDK_RUN_UBSAN=1 00:01:26.850 NET_TYPE=phy 00:01:26.857 RUN_NIGHTLY=1 00:01:26.861 [Pipeline] readFile 00:01:26.886 [Pipeline] withEnv 00:01:26.888 [Pipeline] { 00:01:26.901 [Pipeline] sh 00:01:27.183 + set -ex 00:01:27.183 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:27.183 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:27.183 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.183 ++ SPDK_TEST_NVMF=1 00:01:27.183 ++ SPDK_TEST_NVME_CLI=1 00:01:27.183 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.183 ++ SPDK_TEST_NVMF_NICS=e810 00:01:27.183 ++ SPDK_RUN_UBSAN=1 00:01:27.183 ++ NET_TYPE=phy 00:01:27.183 ++ RUN_NIGHTLY=1 00:01:27.183 + case $SPDK_TEST_NVMF_NICS in 00:01:27.183 + DRIVERS=ice 00:01:27.183 + [[ tcp == \r\d\m\a ]] 00:01:27.183 + [[ -n ice ]] 00:01:27.183 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:27.183 rmmod: ERROR: 
Module mlx4_ib is not currently loaded 00:01:27.183 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:27.183 rmmod: ERROR: Module irdma is not currently loaded 00:01:27.183 rmmod: ERROR: Module i40iw is not currently loaded 00:01:27.183 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:27.183 + true 00:01:27.183 + for D in $DRIVERS 00:01:27.183 + sudo modprobe ice 00:01:27.184 + exit 0 00:01:27.193 [Pipeline] } 00:01:27.210 [Pipeline] // withEnv 00:01:27.215 [Pipeline] } 00:01:27.233 [Pipeline] // stage 00:01:27.242 [Pipeline] catchError 00:01:27.244 [Pipeline] { 00:01:27.259 [Pipeline] timeout 00:01:27.260 Timeout set to expire in 40 min 00:01:27.262 [Pipeline] { 00:01:27.277 [Pipeline] stage 00:01:27.278 [Pipeline] { (Tests) 00:01:27.290 [Pipeline] sh 00:01:27.570 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.570 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.570 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.570 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:27.570 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.570 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.570 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:27.570 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.570 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.570 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.570 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.570 + source /etc/os-release 00:01:27.570 ++ NAME='Fedora Linux' 00:01:27.570 ++ VERSION='38 (Cloud Edition)' 00:01:27.570 ++ ID=fedora 00:01:27.570 ++ VERSION_ID=38 00:01:27.570 ++ VERSION_CODENAME= 00:01:27.570 ++ PLATFORM_ID=platform:f38 00:01:27.570 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:27.570 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:27.570 ++ LOGO=fedora-logo-icon 00:01:27.570 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:27.570 ++ HOME_URL=https://fedoraproject.org/ 00:01:27.570 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:27.570 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:27.570 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:27.570 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:27.570 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:27.570 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:27.570 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:27.570 ++ SUPPORT_END=2024-05-14 00:01:27.570 ++ VARIANT='Cloud Edition' 00:01:27.571 ++ VARIANT_ID=cloud 00:01:27.571 + uname -a 00:01:27.571 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:27.571 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:28.505 Hugepages 00:01:28.505 node hugesize free / total 00:01:28.505 node0 1048576kB 0 / 0 00:01:28.505 node0 2048kB 0 / 0 00:01:28.505 node1 1048576kB 0 / 0 00:01:28.505 node1 2048kB 0 / 0 00:01:28.505 00:01:28.505 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.505 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:28.505 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:28.505 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:28.505 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:28.505 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:28.505 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:28.505 I/OAT 0000:00:04.6 8086 
0e26 0 ioatdma - - 00:01:28.505 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:28.505 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:28.763 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:28.763 + rm -f /tmp/spdk-ld-path 00:01:28.763 + source autorun-spdk.conf 00:01:28.763 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.763 ++ SPDK_TEST_NVMF=1 00:01:28.763 ++ SPDK_TEST_NVME_CLI=1 00:01:28.763 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.763 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.763 ++ SPDK_RUN_UBSAN=1 00:01:28.763 ++ NET_TYPE=phy 00:01:28.763 ++ RUN_NIGHTLY=1 00:01:28.763 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.763 + [[ -n '' ]] 00:01:28.763 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.763 + for M in /var/spdk/build-*-manifest.txt 00:01:28.763 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.763 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.763 + for M in /var/spdk/build-*-manifest.txt 00:01:28.763 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.763 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.764 ++ uname 00:01:28.764 + [[ Linux == \L\i\n\u\x ]] 00:01:28.764 + sudo dmesg -T 00:01:28.764 + sudo dmesg --clear 00:01:28.764 + dmesg_pid=296615 00:01:28.764 + [[ Fedora Linux == FreeBSD ]] 00:01:28.764 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.764 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.764 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.764 + sudo dmesg -Tw 00:01:28.764 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:28.764 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:28.764 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.764 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.764 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.764 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.764 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:28.764 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.764 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.764 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.764 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.764 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.764 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.764 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.764 Test configuration: 00:01:28.764 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.764 SPDK_TEST_NVMF=1 00:01:28.764 SPDK_TEST_NVME_CLI=1 00:01:28.764 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.764 SPDK_TEST_NVMF_NICS=e810 00:01:28.764 SPDK_RUN_UBSAN=1 00:01:28.764 NET_TYPE=phy 00:01:28.764 RUN_NIGHTLY=1 06:38:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.764 06:38:42 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.764 06:38:42 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.764 06:38:42 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.764 06:38:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.764 06:38:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.764 06:38:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.764 06:38:42 -- paths/export.sh@5 -- $ export PATH 00:01:28.764 06:38:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.764 06:38:42 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.764 06:38:42 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:28.764 06:38:42 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715747922.XXXXXX 00:01:28.764 06:38:42 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715747922.YeYiaK 00:01:28.764 06:38:42 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:28.764 06:38:42 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
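The xtrace above shows two setup steps worth noting: autorun.sh sources the job's autorun-spdk.conf to pull the SPDK_TEST_* toggles into the environment, and autobuild_common.sh carves out a per-run scratch directory named after the epoch timestamp. A minimal sketch of both, assuming the same variable names as the trace (everything else is illustrative, not the script's exact code):

    #!/usr/bin/env bash
    set -e
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    # Import the job's toggles (SPDK_TEST_NVMF=1, SPDK_RUN_UBSAN=1, NET_TYPE=phy, ...).
    [[ -f $conf ]] && source "$conf"
    # Per-run scratch space: spdk_<epoch>.XXXXXX,
    # e.g. /tmp/spdk_1715747922.YeYiaK in the trace above.
    SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
    export SPDK_WORKSPACE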
00:01:28.764 06:38:42 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:28.764 06:38:42 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.764 06:38:42 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.764 06:38:42 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:28.764 06:38:42 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:28.764 06:38:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.764 06:38:42 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:28.764 06:38:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:28.764 06:38:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:28.764 06:38:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.764 06:38:42 -- spdk/autobuild.sh@16 -- $ date -u 00:01:28.764 Wed May 15 04:38:42 AM UTC 2024 00:01:28.764 06:38:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:28.764 LTS-24-g36faa8c31 00:01:28.764 06:38:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:28.764 06:38:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:28.764 06:38:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:28.764 06:38:42 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:28.764 06:38:42 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:28.764 06:38:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.764 ************************************ 00:01:28.764 START TEST ubsan 00:01:28.764 ************************************ 00:01:28.764 06:38:42 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:28.764 using ubsan 00:01:28.764 00:01:28.764 real 0m0.000s 00:01:28.764 user 0m0.000s 00:01:28.764 sys 0m0.000s 00:01:28.764 06:38:42 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.764 06:38:42 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.764 ************************************ 00:01:28.764 END TEST ubsan 00:01:28.764 ************************************ 00:01:28.764 06:38:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:28.764 06:38:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.764 06:38:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.764 06:38:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.764 06:38:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.764 06:38:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.764 06:38:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:28.764 06:38:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:28.764 06:38:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:29.022 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:29.022 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:29.280 Using 'verbs' RDMA provider 00:01:39.518 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:49.545 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:49.545 Creating mk/config.mk...done. 00:01:49.545 Creating mk/cc.flags.mk...done. 00:01:49.545 Type 'make' to build. 00:01:49.545 06:39:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:49.545 06:39:03 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:49.545 06:39:03 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:49.545 06:39:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.545 ************************************ 00:01:49.545 START TEST make 00:01:49.545 ************************************ 00:01:49.545 06:39:03 -- common/autotest_common.sh@1104 -- $ make -j48 00:01:49.545 make[1]: Nothing to be done for 'all'. 00:01:57.686 The Meson build system 00:01:57.686 Version: 1.3.1 00:01:57.686 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:57.686 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:57.686 Build type: native build 00:01:57.686 Program cat found: YES (/usr/bin/cat) 00:01:57.686 Project name: DPDK 00:01:57.686 Project version: 23.11.0 00:01:57.686 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:57.686 C linker for the host machine: cc ld.bfd 2.39-16 00:01:57.686 Host machine cpu family: x86_64 00:01:57.686 Host machine cpu: x86_64 00:01:57.686 Message: ## Building in Developer Mode ## 00:01:57.686 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.686 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.686 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.686 Program python3 found: YES (/usr/bin/python3) 00:01:57.686 Program cat found: YES (/usr/bin/cat) 00:01:57.686 Compiler for C supports arguments -march=native: YES 00:01:57.686 Checking for size of "void *" : 8 00:01:57.686 Checking for size of "void *" : 8 (cached) 00:01:57.686 Library m found: YES 00:01:57.686 Library numa found: YES 00:01:57.686 Has header "numaif.h" : YES 00:01:57.686 Library fdt found: NO 00:01:57.686 Library execinfo found: NO 00:01:57.686 Has header "execinfo.h" : YES 00:01:57.686 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:57.686 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.686 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.686 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.686 Run-time dependency openssl found: YES 3.0.9 00:01:57.686 Run-time dependency libpcap found: YES 1.10.4 00:01:57.686 Has header "pcap.h" with dependency libpcap: YES 00:01:57.686 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.686 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.686 Compiler for C supports arguments -Wformat: YES 00:01:57.686 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.686 Compiler for C supports arguments -Wformat-security: NO 00:01:57.686 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.686 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.686 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:57.686 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.686 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.686 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.686 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.686 Compiler for C supports arguments -Wundef: YES 00:01:57.686 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.686 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.686 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.686 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.686 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.686 Program objdump found: YES (/usr/bin/objdump) 00:01:57.686 Compiler for C supports arguments -mavx512f: YES 00:01:57.686 Checking if "AVX512 checking" compiles: YES 00:01:57.686 Fetching value of define "__SSE4_2__" : 1 00:01:57.686 Fetching value of define "__AES__" : 1 00:01:57.686 Fetching value of define "__AVX__" : 1 00:01:57.686 Fetching value of define "__AVX2__" : (undefined) 00:01:57.686 Fetching value of define "__AVX512BW__" : (undefined) 00:01:57.686 Fetching value of define "__AVX512CD__" : (undefined) 00:01:57.686 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:57.686 Fetching value of define "__AVX512F__" : (undefined) 00:01:57.686 Fetching value of define "__AVX512VL__" : (undefined) 00:01:57.686 Fetching value of define "__PCLMUL__" : 1 00:01:57.686 Fetching value of define "__RDRND__" : 1 00:01:57.686 Fetching value of define "__RDSEED__" : (undefined) 00:01:57.686 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.686 Fetching value of define "__znver1__" : (undefined) 00:01:57.686 Fetching value of define "__znver2__" : (undefined) 00:01:57.686 Fetching value of define "__znver3__" : (undefined) 00:01:57.686 Fetching value of define "__znver4__" : (undefined) 00:01:57.686 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.686 Message: lib/log: Defining dependency "log" 00:01:57.686 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.686 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.686 Checking for function "getentropy" : NO 00:01:57.686 Message: lib/eal: Defining dependency "eal" 00:01:57.686 Message: lib/ring: Defining dependency "ring" 00:01:57.686 Message: lib/rcu: Defining dependency "rcu" 00:01:57.686 Message: lib/mempool: Defining dependency "mempool" 00:01:57.686 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.686 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.686 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.686 Compiler for C supports arguments -mpclmul: YES 00:01:57.686 Compiler for C supports arguments -maes: YES 00:01:57.686 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.686 Compiler for C supports arguments -mavx512bw: YES 00:01:57.686 Compiler for C supports arguments -mavx512dq: YES 00:01:57.686 Compiler for C supports arguments -mavx512vl: YES 00:01:57.686 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.686 Compiler for C supports arguments -mavx2: YES 00:01:57.686 Compiler for C supports arguments -mavx: YES 00:01:57.686 Message: lib/net: Defining dependency "net" 00:01:57.686 Message: lib/meter: Defining dependency "meter" 00:01:57.686 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.686 Message: lib/pci: Defining dependency 
"pci" 00:01:57.686 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.686 Message: lib/hash: Defining dependency "hash" 00:01:57.686 Message: lib/timer: Defining dependency "timer" 00:01:57.686 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.686 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.686 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.686 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.686 Message: lib/power: Defining dependency "power" 00:01:57.686 Message: lib/reorder: Defining dependency "reorder" 00:01:57.686 Message: lib/security: Defining dependency "security" 00:01:57.686 Has header "linux/userfaultfd.h" : YES 00:01:57.686 Has header "linux/vduse.h" : YES 00:01:57.686 Message: lib/vhost: Defining dependency "vhost" 00:01:57.686 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.686 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.686 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.686 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.686 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.686 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.686 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.686 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.686 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.686 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.686 Program doxygen found: YES (/usr/bin/doxygen) 00:01:57.686 Configuring doxy-api-html.conf using configuration 00:01:57.686 Configuring doxy-api-man.conf using configuration 00:01:57.686 Program mandb found: YES (/usr/bin/mandb) 00:01:57.686 Program sphinx-build found: NO 00:01:57.686 Configuring rte_build_config.h using configuration 00:01:57.686 Message: 00:01:57.686 ================= 00:01:57.686 Applications Enabled 00:01:57.686 ================= 00:01:57.686 00:01:57.686 apps: 00:01:57.686 00:01:57.686 00:01:57.686 Message: 00:01:57.686 ================= 00:01:57.686 Libraries Enabled 00:01:57.686 ================= 00:01:57.686 00:01:57.686 libs: 00:01:57.686 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.686 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.686 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.686 00:01:57.686 Message: 00:01:57.686 =============== 00:01:57.686 Drivers Enabled 00:01:57.686 =============== 00:01:57.686 00:01:57.686 common: 00:01:57.686 00:01:57.686 bus: 00:01:57.687 pci, vdev, 00:01:57.687 mempool: 00:01:57.687 ring, 00:01:57.687 dma: 00:01:57.687 00:01:57.687 net: 00:01:57.687 00:01:57.687 crypto: 00:01:57.687 00:01:57.687 compress: 00:01:57.687 00:01:57.687 vdpa: 00:01:57.687 00:01:57.687 00:01:57.687 Message: 00:01:57.687 ================= 00:01:57.687 Content Skipped 00:01:57.687 ================= 00:01:57.687 00:01:57.687 apps: 00:01:57.687 dumpcap: explicitly disabled via build config 00:01:57.687 graph: explicitly disabled via build config 00:01:57.687 pdump: explicitly disabled via build config 00:01:57.687 proc-info: explicitly disabled via build config 00:01:57.687 test-acl: explicitly disabled via build config 00:01:57.687 test-bbdev: explicitly disabled via build config 00:01:57.687 test-cmdline: explicitly disabled via build config 00:01:57.687 test-compress-perf: explicitly 
disabled via build config 00:01:57.687 test-crypto-perf: explicitly disabled via build config 00:01:57.687 test-dma-perf: explicitly disabled via build config 00:01:57.687 test-eventdev: explicitly disabled via build config 00:01:57.687 test-fib: explicitly disabled via build config 00:01:57.687 test-flow-perf: explicitly disabled via build config 00:01:57.687 test-gpudev: explicitly disabled via build config 00:01:57.687 test-mldev: explicitly disabled via build config 00:01:57.687 test-pipeline: explicitly disabled via build config 00:01:57.687 test-pmd: explicitly disabled via build config 00:01:57.687 test-regex: explicitly disabled via build config 00:01:57.687 test-sad: explicitly disabled via build config 00:01:57.687 test-security-perf: explicitly disabled via build config 00:01:57.687 00:01:57.687 libs: 00:01:57.687 metrics: explicitly disabled via build config 00:01:57.687 acl: explicitly disabled via build config 00:01:57.687 bbdev: explicitly disabled via build config 00:01:57.687 bitratestats: explicitly disabled via build config 00:01:57.687 bpf: explicitly disabled via build config 00:01:57.687 cfgfile: explicitly disabled via build config 00:01:57.687 distributor: explicitly disabled via build config 00:01:57.687 efd: explicitly disabled via build config 00:01:57.687 eventdev: explicitly disabled via build config 00:01:57.687 dispatcher: explicitly disabled via build config 00:01:57.687 gpudev: explicitly disabled via build config 00:01:57.687 gro: explicitly disabled via build config 00:01:57.687 gso: explicitly disabled via build config 00:01:57.687 ip_frag: explicitly disabled via build config 00:01:57.687 jobstats: explicitly disabled via build config 00:01:57.687 latencystats: explicitly disabled via build config 00:01:57.687 lpm: explicitly disabled via build config 00:01:57.687 member: explicitly disabled via build config 00:01:57.687 pcapng: explicitly disabled via build config 00:01:57.687 rawdev: explicitly disabled via build config 00:01:57.687 regexdev: explicitly disabled via build config 00:01:57.687 mldev: explicitly disabled via build config 00:01:57.687 rib: explicitly disabled via build config 00:01:57.687 sched: explicitly disabled via build config 00:01:57.687 stack: explicitly disabled via build config 00:01:57.687 ipsec: explicitly disabled via build config 00:01:57.687 pdcp: explicitly disabled via build config 00:01:57.687 fib: explicitly disabled via build config 00:01:57.687 port: explicitly disabled via build config 00:01:57.687 pdump: explicitly disabled via build config 00:01:57.687 table: explicitly disabled via build config 00:01:57.687 pipeline: explicitly disabled via build config 00:01:57.687 graph: explicitly disabled via build config 00:01:57.687 node: explicitly disabled via build config 00:01:57.687 00:01:57.687 drivers: 00:01:57.687 common/cpt: not in enabled drivers build config 00:01:57.687 common/dpaax: not in enabled drivers build config 00:01:57.687 common/iavf: not in enabled drivers build config 00:01:57.687 common/idpf: not in enabled drivers build config 00:01:57.687 common/mvep: not in enabled drivers build config 00:01:57.687 common/octeontx: not in enabled drivers build config 00:01:57.687 bus/auxiliary: not in enabled drivers build config 00:01:57.687 bus/cdx: not in enabled drivers build config 00:01:57.687 bus/dpaa: not in enabled drivers build config 00:01:57.687 bus/fslmc: not in enabled drivers build config 00:01:57.687 bus/ifpga: not in enabled drivers build config 00:01:57.687 bus/platform: not in enabled drivers 
build config 00:01:57.687 bus/vmbus: not in enabled drivers build config 00:01:57.687 common/cnxk: not in enabled drivers build config 00:01:57.687 common/mlx5: not in enabled drivers build config 00:01:57.687 common/nfp: not in enabled drivers build config 00:01:57.687 common/qat: not in enabled drivers build config 00:01:57.687 common/sfc_efx: not in enabled drivers build config 00:01:57.687 mempool/bucket: not in enabled drivers build config 00:01:57.687 mempool/cnxk: not in enabled drivers build config 00:01:57.687 mempool/dpaa: not in enabled drivers build config 00:01:57.687 mempool/dpaa2: not in enabled drivers build config 00:01:57.687 mempool/octeontx: not in enabled drivers build config 00:01:57.687 mempool/stack: not in enabled drivers build config 00:01:57.687 dma/cnxk: not in enabled drivers build config 00:01:57.687 dma/dpaa: not in enabled drivers build config 00:01:57.687 dma/dpaa2: not in enabled drivers build config 00:01:57.687 dma/hisilicon: not in enabled drivers build config 00:01:57.687 dma/idxd: not in enabled drivers build config 00:01:57.687 dma/ioat: not in enabled drivers build config 00:01:57.687 dma/skeleton: not in enabled drivers build config 00:01:57.687 net/af_packet: not in enabled drivers build config 00:01:57.687 net/af_xdp: not in enabled drivers build config 00:01:57.687 net/ark: not in enabled drivers build config 00:01:57.687 net/atlantic: not in enabled drivers build config 00:01:57.687 net/avp: not in enabled drivers build config 00:01:57.687 net/axgbe: not in enabled drivers build config 00:01:57.687 net/bnx2x: not in enabled drivers build config 00:01:57.687 net/bnxt: not in enabled drivers build config 00:01:57.687 net/bonding: not in enabled drivers build config 00:01:57.687 net/cnxk: not in enabled drivers build config 00:01:57.687 net/cpfl: not in enabled drivers build config 00:01:57.687 net/cxgbe: not in enabled drivers build config 00:01:57.687 net/dpaa: not in enabled drivers build config 00:01:57.687 net/dpaa2: not in enabled drivers build config 00:01:57.687 net/e1000: not in enabled drivers build config 00:01:57.687 net/ena: not in enabled drivers build config 00:01:57.687 net/enetc: not in enabled drivers build config 00:01:57.687 net/enetfec: not in enabled drivers build config 00:01:57.687 net/enic: not in enabled drivers build config 00:01:57.687 net/failsafe: not in enabled drivers build config 00:01:57.687 net/fm10k: not in enabled drivers build config 00:01:57.687 net/gve: not in enabled drivers build config 00:01:57.687 net/hinic: not in enabled drivers build config 00:01:57.687 net/hns3: not in enabled drivers build config 00:01:57.687 net/i40e: not in enabled drivers build config 00:01:57.687 net/iavf: not in enabled drivers build config 00:01:57.687 net/ice: not in enabled drivers build config 00:01:57.687 net/idpf: not in enabled drivers build config 00:01:57.687 net/igc: not in enabled drivers build config 00:01:57.687 net/ionic: not in enabled drivers build config 00:01:57.687 net/ipn3ke: not in enabled drivers build config 00:01:57.687 net/ixgbe: not in enabled drivers build config 00:01:57.687 net/mana: not in enabled drivers build config 00:01:57.687 net/memif: not in enabled drivers build config 00:01:57.687 net/mlx4: not in enabled drivers build config 00:01:57.687 net/mlx5: not in enabled drivers build config 00:01:57.687 net/mvneta: not in enabled drivers build config 00:01:57.687 net/mvpp2: not in enabled drivers build config 00:01:57.687 net/netvsc: not in enabled drivers build config 00:01:57.687 net/nfb: not 
in enabled drivers build config 00:01:57.687 net/nfp: not in enabled drivers build config 00:01:57.687 net/ngbe: not in enabled drivers build config 00:01:57.687 net/null: not in enabled drivers build config 00:01:57.687 net/octeontx: not in enabled drivers build config 00:01:57.687 net/octeon_ep: not in enabled drivers build config 00:01:57.687 net/pcap: not in enabled drivers build config 00:01:57.687 net/pfe: not in enabled drivers build config 00:01:57.687 net/qede: not in enabled drivers build config 00:01:57.687 net/ring: not in enabled drivers build config 00:01:57.687 net/sfc: not in enabled drivers build config 00:01:57.687 net/softnic: not in enabled drivers build config 00:01:57.687 net/tap: not in enabled drivers build config 00:01:57.687 net/thunderx: not in enabled drivers build config 00:01:57.687 net/txgbe: not in enabled drivers build config 00:01:57.687 net/vdev_netvsc: not in enabled drivers build config 00:01:57.687 net/vhost: not in enabled drivers build config 00:01:57.687 net/virtio: not in enabled drivers build config 00:01:57.687 net/vmxnet3: not in enabled drivers build config 00:01:57.687 raw/*: missing internal dependency, "rawdev" 00:01:57.687 crypto/armv8: not in enabled drivers build config 00:01:57.687 crypto/bcmfs: not in enabled drivers build config 00:01:57.687 crypto/caam_jr: not in enabled drivers build config 00:01:57.687 crypto/ccp: not in enabled drivers build config 00:01:57.687 crypto/cnxk: not in enabled drivers build config 00:01:57.687 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.687 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.687 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.687 crypto/mlx5: not in enabled drivers build config 00:01:57.687 crypto/mvsam: not in enabled drivers build config 00:01:57.687 crypto/nitrox: not in enabled drivers build config 00:01:57.687 crypto/null: not in enabled drivers build config 00:01:57.687 crypto/octeontx: not in enabled drivers build config 00:01:57.687 crypto/openssl: not in enabled drivers build config 00:01:57.687 crypto/scheduler: not in enabled drivers build config 00:01:57.687 crypto/uadk: not in enabled drivers build config 00:01:57.687 crypto/virtio: not in enabled drivers build config 00:01:57.687 compress/isal: not in enabled drivers build config 00:01:57.687 compress/mlx5: not in enabled drivers build config 00:01:57.687 compress/octeontx: not in enabled drivers build config 00:01:57.687 compress/zlib: not in enabled drivers build config 00:01:57.687 regex/*: missing internal dependency, "regexdev" 00:01:57.687 ml/*: missing internal dependency, "mldev" 00:01:57.687 vdpa/ifc: not in enabled drivers build config 00:01:57.687 vdpa/mlx5: not in enabled drivers build config 00:01:57.687 vdpa/nfp: not in enabled drivers build config 00:01:57.687 vdpa/sfc: not in enabled drivers build config 00:01:57.687 event/*: missing internal dependency, "eventdev" 00:01:57.687 baseband/*: missing internal dependency, "bbdev" 00:01:57.688 gpu/*: missing internal dependency, "gpudev" 00:01:57.688 00:01:57.688 00:01:58.253 Build targets in project: 85 00:01:58.253 00:01:58.253 DPDK 23.11.0 00:01:58.253 00:01:58.253 User defined options 00:01:58.253 buildtype : debug 00:01:58.253 default_library : shared 00:01:58.253 libdir : lib 00:01:58.253 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:58.253 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:58.253 c_link_args : 00:01:58.253 
cpu_instruction_set: native 00:01:58.253 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:58.253 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:58.253 enable_docs : false 00:01:58.253 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:58.253 enable_kmods : false 00:01:58.253 tests : false 00:01:58.253 00:01:58.253 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:58.517 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:58.776 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:58.776 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:58.776 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:58.776 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:58.776 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:58.776 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:58.776 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:58.776 [8/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:58.776 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.776 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:58.776 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:58.776 [12/265] Linking static target lib/librte_kvargs.a 00:01:58.776 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.776 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:58.776 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:58.776 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:58.776 [17/265] Linking static target lib/librte_log.a 00:01:58.776 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:58.776 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:58.776 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:59.036 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:59.299 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.574 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.574 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.574 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:59.574 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.574 [27/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.574 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.574 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:59.574 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:59.574 
[31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:59.574 [32/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.574 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.574 [34/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:59.575 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:59.575 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:59.575 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:59.575 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.575 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:59.575 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:59.575 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:59.575 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:59.575 [43/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:59.575 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:59.575 [45/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.575 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:59.575 [47/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.575 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.575 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:59.575 [50/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:59.575 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:59.575 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:59.575 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.575 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:59.575 [55/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:59.575 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:59.575 [57/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:59.575 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:59.575 [59/265] Linking static target lib/librte_telemetry.a 00:01:59.833 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:59.833 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:59.833 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:59.833 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:59.833 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:59.833 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:59.833 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:59.833 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:59.833 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:59.833 [69/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:59.833 [70/265] Generating lib/log.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:59.833 [71/265] Linking static target lib/librte_pci.a 00:01:59.833 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:59.833 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:59.833 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.096 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.096 [76/265] Linking target lib/librte_log.so.24.0 00:02:00.096 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.096 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.096 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.096 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.096 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.096 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.358 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.358 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.358 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.358 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.358 [87/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:00.358 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:00.358 [89/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:00.358 [90/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:00.358 [91/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.358 [92/265] Linking target lib/librte_kvargs.so.24.0 00:02:00.621 [93/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.621 [94/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.621 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.621 [96/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.622 [97/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.622 [98/265] Linking static target lib/librte_ring.a 00:02:00.622 [99/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.622 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.622 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.622 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.622 [103/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.622 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.622 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.622 [106/265] Linking static target lib/librte_eal.a 00:02:00.622 [107/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.622 [108/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.622 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:00.622 [110/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.622 [111/265] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.622 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:00.622 [113/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.622 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.622 [115/265] Linking static target lib/librte_meter.a 00:02:00.622 [116/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.884 [117/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.884 [118/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.884 [119/265] Linking target lib/librte_telemetry.so.24.0 00:02:00.884 [120/265] Linking static target lib/librte_mempool.a 00:02:00.884 [121/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:00.884 [122/265] Linking static target lib/librte_rcu.a 00:02:00.884 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.884 [124/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.884 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.884 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.884 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.884 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.884 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.884 [130/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.884 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.884 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:01.153 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:01.153 [134/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:01.153 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:01.153 [136/265] Linking static target lib/librte_cmdline.a 00:02:01.153 [137/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:01.153 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.153 [139/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:01.153 [140/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.153 [141/265] Linking static target lib/librte_net.a 00:02:01.153 [142/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.418 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:01.418 [144/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:01.418 [145/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.418 [146/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:01.418 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:01.418 [148/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.418 [149/265] Linking static target lib/librte_timer.a 00:02:01.418 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:01.418 [151/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:01.418 [152/265] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:01.418 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.418 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.678 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.678 [156/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.678 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.678 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:01.678 [159/265] Linking static target lib/librte_dmadev.a 00:02:01.678 [160/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:01.678 [161/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.678 [162/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.678 [163/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:01.678 [164/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:01.678 [165/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.678 [166/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:01.678 [167/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.936 [168/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:01.936 [169/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:01.936 [170/265] Linking static target lib/librte_hash.a 00:02:01.936 [171/265] Linking static target lib/librte_power.a 00:02:01.936 [172/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.936 [173/265] Linking static target lib/librte_compressdev.a 00:02:01.936 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.936 [175/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:01.936 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.936 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.936 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:01.936 [179/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.936 [180/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:01.936 [181/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.936 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:02.194 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:02.194 [184/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:02.194 [185/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:02.194 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:02.194 [187/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:02.194 [188/265] Linking static target lib/librte_reorder.a 00:02:02.194 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:02.194 [190/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:02.194 [191/265] Linking static target lib/librte_security.a 
00:02:02.194 [192/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:02.194 [193/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:02.194 [194/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.194 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:02.194 [196/265] Linking static target lib/librte_mbuf.a 00:02:02.194 [197/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.194 [198/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:02.194 [199/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.194 [200/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.452 [201/265] Linking static target drivers/librte_bus_vdev.a 00:02:02.452 [202/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:02.452 [203/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:02.452 [204/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.452 [205/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.452 [206/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:02.452 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.452 [208/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:02.452 [209/265] Linking static target drivers/librte_bus_pci.a 00:02:02.452 [210/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:02.452 [211/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:02.452 [212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.452 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:02.452 [214/265] Linking static target drivers/librte_mempool_ring.a 00:02:02.452 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.452 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.452 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:02.452 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:02.710 [219/265] Linking static target lib/librte_ethdev.a 00:02:02.710 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.710 [221/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:02.710 [222/265] Linking static target lib/librte_cryptodev.a 00:02:02.968 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.902 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.837 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:06.739 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.997 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.997 [228/265] Linking target lib/librte_eal.so.24.0 00:02:06.997 [229/265] 
Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:07.255 [230/265] Linking target lib/librte_meter.so.24.0 00:02:07.255 [231/265] Linking target lib/librte_pci.so.24.0 00:02:07.255 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:07.255 [233/265] Linking target lib/librte_ring.so.24.0 00:02:07.255 [234/265] Linking target lib/librte_timer.so.24.0 00:02:07.255 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:07.255 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:07.255 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:07.255 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:07.255 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:07.255 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:07.255 [241/265] Linking target lib/librte_rcu.so.24.0 00:02:07.255 [242/265] Linking target lib/librte_mempool.so.24.0 00:02:07.255 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:07.513 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:07.513 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:07.513 [246/265] Linking target lib/librte_mbuf.so.24.0 00:02:07.513 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:07.513 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:07.513 [249/265] Linking target lib/librte_compressdev.so.24.0 00:02:07.513 [250/265] Linking target lib/librte_reorder.so.24.0 00:02:07.513 [251/265] Linking target lib/librte_net.so.24.0 00:02:07.513 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:02:07.770 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:07.770 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:07.770 [255/265] Linking target lib/librte_cmdline.so.24.0 00:02:07.770 [256/265] Linking target lib/librte_hash.so.24.0 00:02:07.770 [257/265] Linking target lib/librte_security.so.24.0 00:02:07.770 [258/265] Linking target lib/librte_ethdev.so.24.0 00:02:08.028 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:08.028 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:08.028 [261/265] Linking target lib/librte_power.so.24.0 00:02:10.610 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.610 [263/265] Linking static target lib/librte_vhost.a 00:02:11.548 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.548 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:11.548 INFO: autodetecting backend as ninja 00:02:11.548 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:12.486 CC lib/log/log.o 00:02:12.486 CC lib/log/log_flags.o 00:02:12.486 CC lib/log/log_deprecated.o 00:02:12.486 CC lib/ut_mock/mock.o 00:02:12.486 CC lib/ut/ut.o 00:02:12.486 LIB libspdk_ut_mock.a 00:02:12.486 SO libspdk_ut_mock.so.5.0 00:02:12.486 LIB libspdk_log.a 00:02:12.486 LIB libspdk_ut.a 00:02:12.486 SO libspdk_ut.so.1.0 00:02:12.486 SO libspdk_log.so.6.1 00:02:12.486 SYMLINK libspdk_ut_mock.so 00:02:12.486 SYMLINK 
libspdk_ut.so 00:02:12.486 SYMLINK libspdk_log.so 00:02:12.744 CC lib/dma/dma.o 00:02:12.744 CXX lib/trace_parser/trace.o 00:02:12.744 CC lib/ioat/ioat.o 00:02:12.744 CC lib/util/base64.o 00:02:12.744 CC lib/util/bit_array.o 00:02:12.744 CC lib/util/cpuset.o 00:02:12.744 CC lib/util/crc16.o 00:02:12.745 CC lib/util/crc32.o 00:02:12.745 CC lib/util/crc32c.o 00:02:12.745 CC lib/util/crc32_ieee.o 00:02:12.745 CC lib/util/crc64.o 00:02:12.745 CC lib/util/dif.o 00:02:12.745 CC lib/util/fd.o 00:02:12.745 CC lib/util/file.o 00:02:12.745 CC lib/util/hexlify.o 00:02:12.745 CC lib/util/iov.o 00:02:12.745 CC lib/util/math.o 00:02:12.745 CC lib/util/pipe.o 00:02:12.745 CC lib/util/strerror_tls.o 00:02:12.745 CC lib/util/string.o 00:02:12.745 CC lib/util/uuid.o 00:02:12.745 CC lib/util/fd_group.o 00:02:12.745 CC lib/util/xor.o 00:02:12.745 CC lib/util/zipf.o 00:02:12.745 CC lib/vfio_user/host/vfio_user_pci.o 00:02:12.745 CC lib/vfio_user/host/vfio_user.o 00:02:12.745 LIB libspdk_dma.a 00:02:12.745 SO libspdk_dma.so.3.0 00:02:13.003 SYMLINK libspdk_dma.so 00:02:13.003 LIB libspdk_ioat.a 00:02:13.003 SO libspdk_ioat.so.6.0 00:02:13.003 SYMLINK libspdk_ioat.so 00:02:13.003 LIB libspdk_vfio_user.a 00:02:13.003 SO libspdk_vfio_user.so.4.0 00:02:13.003 SYMLINK libspdk_vfio_user.so 00:02:13.261 LIB libspdk_util.a 00:02:13.261 SO libspdk_util.so.8.0 00:02:13.519 SYMLINK libspdk_util.so 00:02:13.519 CC lib/vmd/vmd.o 00:02:13.519 CC lib/rdma/common.o 00:02:13.519 CC lib/env_dpdk/env.o 00:02:13.519 CC lib/json/json_parse.o 00:02:13.519 CC lib/rdma/rdma_verbs.o 00:02:13.519 CC lib/env_dpdk/memory.o 00:02:13.519 CC lib/idxd/idxd.o 00:02:13.519 CC lib/env_dpdk/pci.o 00:02:13.519 CC lib/vmd/led.o 00:02:13.519 CC lib/conf/conf.o 00:02:13.519 CC lib/json/json_util.o 00:02:13.519 CC lib/idxd/idxd_user.o 00:02:13.519 CC lib/env_dpdk/init.o 00:02:13.519 CC lib/json/json_write.o 00:02:13.519 CC lib/env_dpdk/threads.o 00:02:13.519 CC lib/env_dpdk/pci_ioat.o 00:02:13.520 CC lib/env_dpdk/pci_virtio.o 00:02:13.520 CC lib/env_dpdk/pci_vmd.o 00:02:13.520 CC lib/env_dpdk/pci_idxd.o 00:02:13.520 CC lib/env_dpdk/pci_event.o 00:02:13.520 CC lib/env_dpdk/sigbus_handler.o 00:02:13.520 CC lib/env_dpdk/pci_dpdk.o 00:02:13.520 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:13.520 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:13.520 LIB libspdk_trace_parser.a 00:02:13.520 SO libspdk_trace_parser.so.4.0 00:02:13.777 SYMLINK libspdk_trace_parser.so 00:02:13.777 LIB libspdk_conf.a 00:02:13.777 SO libspdk_conf.so.5.0 00:02:13.777 LIB libspdk_rdma.a 00:02:13.777 SYMLINK libspdk_conf.so 00:02:13.777 LIB libspdk_json.a 00:02:13.777 SO libspdk_rdma.so.5.0 00:02:13.777 SO libspdk_json.so.5.1 00:02:13.777 SYMLINK libspdk_rdma.so 00:02:14.034 SYMLINK libspdk_json.so 00:02:14.034 CC lib/jsonrpc/jsonrpc_server.o 00:02:14.034 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:14.034 CC lib/jsonrpc/jsonrpc_client.o 00:02:14.034 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:14.034 LIB libspdk_idxd.a 00:02:14.034 SO libspdk_idxd.so.11.0 00:02:14.292 SYMLINK libspdk_idxd.so 00:02:14.292 LIB libspdk_vmd.a 00:02:14.292 SO libspdk_vmd.so.5.0 00:02:14.292 SYMLINK libspdk_vmd.so 00:02:14.292 LIB libspdk_jsonrpc.a 00:02:14.292 SO libspdk_jsonrpc.so.5.1 00:02:14.292 SYMLINK libspdk_jsonrpc.so 00:02:14.550 CC lib/rpc/rpc.o 00:02:14.808 LIB libspdk_rpc.a 00:02:14.808 SO libspdk_rpc.so.5.0 00:02:14.808 SYMLINK libspdk_rpc.so 00:02:14.808 CC lib/notify/notify.o 00:02:14.808 CC lib/sock/sock.o 00:02:14.808 CC lib/sock/sock_rpc.o 00:02:14.808 CC lib/notify/notify_rpc.o 00:02:14.808 CC 
lib/trace/trace.o 00:02:14.808 CC lib/trace/trace_flags.o 00:02:14.808 CC lib/trace/trace_rpc.o 00:02:15.066 LIB libspdk_notify.a 00:02:15.066 SO libspdk_notify.so.5.0 00:02:15.066 SYMLINK libspdk_notify.so 00:02:15.066 LIB libspdk_trace.a 00:02:15.066 SO libspdk_trace.so.9.0 00:02:15.066 SYMLINK libspdk_trace.so 00:02:15.325 LIB libspdk_sock.a 00:02:15.325 SO libspdk_sock.so.8.0 00:02:15.325 CC lib/thread/thread.o 00:02:15.325 CC lib/thread/iobuf.o 00:02:15.325 SYMLINK libspdk_sock.so 00:02:15.325 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:15.325 CC lib/nvme/nvme_ctrlr.o 00:02:15.325 CC lib/nvme/nvme_fabric.o 00:02:15.325 CC lib/nvme/nvme_ns_cmd.o 00:02:15.325 CC lib/nvme/nvme_ns.o 00:02:15.325 CC lib/nvme/nvme_pcie_common.o 00:02:15.325 CC lib/nvme/nvme_pcie.o 00:02:15.325 CC lib/nvme/nvme_qpair.o 00:02:15.325 CC lib/nvme/nvme.o 00:02:15.325 CC lib/nvme/nvme_quirks.o 00:02:15.325 CC lib/nvme/nvme_transport.o 00:02:15.325 CC lib/nvme/nvme_discovery.o 00:02:15.325 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:15.325 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:15.325 CC lib/nvme/nvme_tcp.o 00:02:15.325 CC lib/nvme/nvme_opal.o 00:02:15.325 CC lib/nvme/nvme_io_msg.o 00:02:15.325 CC lib/nvme/nvme_poll_group.o 00:02:15.325 CC lib/nvme/nvme_zns.o 00:02:15.325 CC lib/nvme/nvme_cuse.o 00:02:15.325 CC lib/nvme/nvme_vfio_user.o 00:02:15.325 CC lib/nvme/nvme_rdma.o 00:02:15.583 LIB libspdk_env_dpdk.a 00:02:15.583 SO libspdk_env_dpdk.so.13.0 00:02:15.841 SYMLINK libspdk_env_dpdk.so 00:02:16.774 LIB libspdk_thread.a 00:02:16.774 SO libspdk_thread.so.9.0 00:02:17.032 SYMLINK libspdk_thread.so 00:02:17.032 CC lib/blob/blobstore.o 00:02:17.032 CC lib/accel/accel.o 00:02:17.032 CC lib/virtio/virtio.o 00:02:17.032 CC lib/accel/accel_rpc.o 00:02:17.032 CC lib/blob/request.o 00:02:17.032 CC lib/virtio/virtio_vhost_user.o 00:02:17.032 CC lib/blob/zeroes.o 00:02:17.032 CC lib/accel/accel_sw.o 00:02:17.032 CC lib/init/json_config.o 00:02:17.032 CC lib/virtio/virtio_vfio_user.o 00:02:17.032 CC lib/blob/blob_bs_dev.o 00:02:17.032 CC lib/init/subsystem.o 00:02:17.032 CC lib/virtio/virtio_pci.o 00:02:17.032 CC lib/init/subsystem_rpc.o 00:02:17.032 CC lib/init/rpc.o 00:02:17.290 LIB libspdk_init.a 00:02:17.290 SO libspdk_init.so.4.0 00:02:17.290 LIB libspdk_virtio.a 00:02:17.290 SYMLINK libspdk_init.so 00:02:17.548 SO libspdk_virtio.so.6.0 00:02:17.548 SYMLINK libspdk_virtio.so 00:02:17.548 CC lib/event/app.o 00:02:17.548 CC lib/event/reactor.o 00:02:17.548 CC lib/event/log_rpc.o 00:02:17.548 CC lib/event/app_rpc.o 00:02:17.548 CC lib/event/scheduler_static.o 00:02:17.806 LIB libspdk_nvme.a 00:02:17.806 SO libspdk_nvme.so.12.0 00:02:17.806 LIB libspdk_event.a 00:02:18.064 SO libspdk_event.so.12.0 00:02:18.064 SYMLINK libspdk_event.so 00:02:18.064 LIB libspdk_accel.a 00:02:18.064 SO libspdk_accel.so.14.0 00:02:18.064 SYMLINK libspdk_nvme.so 00:02:18.064 SYMLINK libspdk_accel.so 00:02:18.322 CC lib/bdev/bdev.o 00:02:18.322 CC lib/bdev/bdev_rpc.o 00:02:18.322 CC lib/bdev/bdev_zone.o 00:02:18.322 CC lib/bdev/part.o 00:02:18.322 CC lib/bdev/scsi_nvme.o 00:02:19.696 LIB libspdk_blob.a 00:02:19.696 SO libspdk_blob.so.10.1 00:02:19.953 SYMLINK libspdk_blob.so 00:02:19.953 CC lib/blobfs/blobfs.o 00:02:19.953 CC lib/blobfs/tree.o 00:02:19.953 CC lib/lvol/lvol.o 00:02:20.887 LIB libspdk_bdev.a 00:02:20.887 LIB libspdk_blobfs.a 00:02:20.887 SO libspdk_blobfs.so.9.0 00:02:20.887 LIB libspdk_lvol.a 00:02:20.887 SO libspdk_bdev.so.14.0 00:02:20.887 SO libspdk_lvol.so.9.1 00:02:20.887 SYMLINK libspdk_blobfs.so 00:02:20.887 SYMLINK libspdk_lvol.so 
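The CC/LIB/SO/SYMLINK prefixes are SPDK's quiet make output: CC compiles one object, LIB archives a static library, SO links the versioned shared object (for example libspdk_log.so.6.1), and SYMLINK points the unversioned name at it. A sketch of auditing those links after the build, assuming SPDK's default build/lib output directory:

    # Sketch: show where each unversioned libspdk_*.so symlink points.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
    for link in libspdk_*.so; do
        [ -L "$link" ] || continue            # skip anything that isn't a symlink
        printf '%-30s -> %s\n' "$link" "$(readlink "$link")"
    done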
00:02:20.887 SYMLINK libspdk_bdev.so 00:02:20.887 CC lib/nbd/nbd.o 00:02:20.887 CC lib/nvmf/ctrlr.o 00:02:20.887 CC lib/nbd/nbd_rpc.o 00:02:20.887 CC lib/nvmf/ctrlr_discovery.o 00:02:20.887 CC lib/scsi/dev.o 00:02:20.887 CC lib/nvmf/ctrlr_bdev.o 00:02:20.887 CC lib/scsi/lun.o 00:02:20.887 CC lib/ftl/ftl_core.o 00:02:20.887 CC lib/ublk/ublk.o 00:02:20.887 CC lib/scsi/port.o 00:02:20.887 CC lib/nvmf/subsystem.o 00:02:20.887 CC lib/ublk/ublk_rpc.o 00:02:20.887 CC lib/scsi/scsi.o 00:02:20.887 CC lib/ftl/ftl_init.o 00:02:20.887 CC lib/nvmf/nvmf.o 00:02:20.887 CC lib/scsi/scsi_bdev.o 00:02:20.887 CC lib/ftl/ftl_layout.o 00:02:20.887 CC lib/nvmf/nvmf_rpc.o 00:02:20.887 CC lib/scsi/scsi_pr.o 00:02:20.887 CC lib/ftl/ftl_debug.o 00:02:20.887 CC lib/nvmf/transport.o 00:02:20.887 CC lib/nvmf/tcp.o 00:02:20.887 CC lib/scsi/scsi_rpc.o 00:02:20.887 CC lib/ftl/ftl_io.o 00:02:20.887 CC lib/scsi/task.o 00:02:20.887 CC lib/ftl/ftl_sb.o 00:02:20.887 CC lib/nvmf/rdma.o 00:02:20.887 CC lib/ftl/ftl_l2p_flat.o 00:02:20.887 CC lib/ftl/ftl_l2p.o 00:02:20.887 CC lib/ftl/ftl_nv_cache.o 00:02:20.887 CC lib/ftl/ftl_band.o 00:02:20.887 CC lib/ftl/ftl_band_ops.o 00:02:20.887 CC lib/ftl/ftl_writer.o 00:02:20.887 CC lib/ftl/ftl_rq.o 00:02:20.887 CC lib/ftl/ftl_reloc.o 00:02:21.155 CC lib/ftl/ftl_l2p_cache.o 00:02:21.155 CC lib/ftl/ftl_p2l.o 00:02:21.155 CC lib/ftl/mngt/ftl_mngt.o 00:02:21.155 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:21.155 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:21.156 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:21.414 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:21.414 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:21.414 CC lib/ftl/utils/ftl_conf.o 00:02:21.414 CC lib/ftl/utils/ftl_md.o 00:02:21.414 CC lib/ftl/utils/ftl_mempool.o 00:02:21.414 CC lib/ftl/utils/ftl_bitmap.o 00:02:21.414 CC lib/ftl/utils/ftl_property.o 00:02:21.414 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:21.414 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:21.414 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:21.414 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:21.414 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:21.414 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:21.414 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:21.414 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:21.414 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:21.414 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:21.414 CC lib/ftl/base/ftl_base_dev.o 00:02:21.414 CC lib/ftl/base/ftl_base_bdev.o 00:02:21.414 CC lib/ftl/ftl_trace.o 00:02:21.672 LIB libspdk_nbd.a 00:02:21.672 SO libspdk_nbd.so.6.0 00:02:21.930 LIB libspdk_scsi.a 00:02:21.930 SYMLINK libspdk_nbd.so 00:02:21.930 SO libspdk_scsi.so.8.0 00:02:21.930 LIB libspdk_ublk.a 00:02:21.930 SYMLINK libspdk_scsi.so 00:02:21.930 SO libspdk_ublk.so.2.0 00:02:21.930 SYMLINK libspdk_ublk.so 00:02:21.930 CC lib/iscsi/conn.o 00:02:21.930 CC lib/vhost/vhost.o 00:02:21.930 CC lib/vhost/vhost_rpc.o 00:02:21.930 CC lib/iscsi/init_grp.o 00:02:21.930 CC lib/vhost/vhost_scsi.o 00:02:21.930 CC lib/iscsi/iscsi.o 00:02:21.930 CC lib/vhost/vhost_blk.o 00:02:21.930 CC lib/iscsi/md5.o 00:02:21.930 CC lib/vhost/rte_vhost_user.o 00:02:21.930 CC lib/iscsi/param.o 00:02:21.930 CC lib/iscsi/portal_grp.o 00:02:21.930 CC lib/iscsi/tgt_node.o 00:02:21.930 CC lib/iscsi/iscsi_subsystem.o 
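The lib/nvmf objects above (ctrlr, subsystem, transport, tcp, rdma) make up libspdk_nvmf, the target-side NVMe-oF library this nvmf-tcp job exercises. A sketch of the minimal RPC sequence that brings up a TCP listener once the build finishes; the bdev name, subsystem NQN, and address below are placeholders, not values from this run:

    # Sketch: minimal NVMe-oF/TCP target using the freshly built binaries.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/bin/spdk_tgt &
    sleep 2
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4420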
00:02:21.930 CC lib/iscsi/iscsi_rpc.o 00:02:21.930 CC lib/iscsi/task.o 00:02:22.497 LIB libspdk_ftl.a 00:02:22.497 SO libspdk_ftl.so.8.0 00:02:23.090 SYMLINK libspdk_ftl.so 00:02:23.348 LIB libspdk_vhost.a 00:02:23.348 SO libspdk_vhost.so.7.1 00:02:23.348 SYMLINK libspdk_vhost.so 00:02:23.348 LIB libspdk_iscsi.a 00:02:23.348 LIB libspdk_nvmf.a 00:02:23.607 SO libspdk_iscsi.so.7.0 00:02:23.607 SO libspdk_nvmf.so.17.0 00:02:23.607 SYMLINK libspdk_iscsi.so 00:02:23.607 SYMLINK libspdk_nvmf.so 00:02:23.865 CC module/env_dpdk/env_dpdk_rpc.o 00:02:23.865 CC module/blob/bdev/blob_bdev.o 00:02:23.865 CC module/accel/ioat/accel_ioat.o 00:02:23.865 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:23.865 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:23.865 CC module/accel/ioat/accel_ioat_rpc.o 00:02:23.865 CC module/scheduler/gscheduler/gscheduler.o 00:02:23.865 CC module/sock/posix/posix.o 00:02:23.865 CC module/accel/iaa/accel_iaa.o 00:02:23.865 CC module/accel/error/accel_error.o 00:02:23.865 CC module/accel/dsa/accel_dsa.o 00:02:23.865 CC module/accel/error/accel_error_rpc.o 00:02:23.865 CC module/accel/iaa/accel_iaa_rpc.o 00:02:23.865 CC module/accel/dsa/accel_dsa_rpc.o 00:02:23.865 LIB libspdk_env_dpdk_rpc.a 00:02:24.124 SO libspdk_env_dpdk_rpc.so.5.0 00:02:24.124 LIB libspdk_scheduler_gscheduler.a 00:02:24.124 LIB libspdk_scheduler_dpdk_governor.a 00:02:24.124 SYMLINK libspdk_env_dpdk_rpc.so 00:02:24.124 SO libspdk_scheduler_gscheduler.so.3.0 00:02:24.124 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:24.124 LIB libspdk_accel_error.a 00:02:24.124 LIB libspdk_accel_ioat.a 00:02:24.124 LIB libspdk_scheduler_dynamic.a 00:02:24.124 LIB libspdk_accel_iaa.a 00:02:24.124 SO libspdk_accel_error.so.1.0 00:02:24.124 SO libspdk_accel_ioat.so.5.0 00:02:24.124 SO libspdk_scheduler_dynamic.so.3.0 00:02:24.124 SYMLINK libspdk_scheduler_gscheduler.so 00:02:24.124 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:24.124 SO libspdk_accel_iaa.so.2.0 00:02:24.124 LIB libspdk_accel_dsa.a 00:02:24.124 LIB libspdk_blob_bdev.a 00:02:24.124 SYMLINK libspdk_accel_ioat.so 00:02:24.124 SYMLINK libspdk_accel_error.so 00:02:24.124 SYMLINK libspdk_scheduler_dynamic.so 00:02:24.124 SO libspdk_accel_dsa.so.4.0 00:02:24.124 SYMLINK libspdk_accel_iaa.so 00:02:24.124 SO libspdk_blob_bdev.so.10.1 00:02:24.124 SYMLINK libspdk_accel_dsa.so 00:02:24.384 SYMLINK libspdk_blob_bdev.so 00:02:24.384 CC module/bdev/delay/vbdev_delay.o 00:02:24.384 CC module/bdev/malloc/bdev_malloc.o 00:02:24.384 CC module/bdev/gpt/gpt.o 00:02:24.384 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:24.384 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:24.384 CC module/bdev/nvme/bdev_nvme.o 00:02:24.384 CC module/bdev/lvol/vbdev_lvol.o 00:02:24.384 CC module/bdev/passthru/vbdev_passthru.o 00:02:24.384 CC module/bdev/gpt/vbdev_gpt.o 00:02:24.384 CC module/bdev/split/vbdev_split.o 00:02:24.384 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:24.384 CC module/bdev/null/bdev_null.o 00:02:24.384 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:24.384 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:24.384 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:24.384 CC module/bdev/ftl/bdev_ftl.o 00:02:24.384 CC module/bdev/iscsi/bdev_iscsi.o 00:02:24.384 CC module/bdev/split/vbdev_split_rpc.o 00:02:24.384 CC module/bdev/aio/bdev_aio.o 00:02:24.384 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:24.384 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:24.384 CC module/bdev/nvme/nvme_rpc.o 00:02:24.384 CC module/bdev/null/bdev_null_rpc.o 00:02:24.384 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:02:24.384 CC module/bdev/error/vbdev_error.o 00:02:24.384 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:24.384 CC module/bdev/error/vbdev_error_rpc.o 00:02:24.384 CC module/bdev/raid/bdev_raid.o 00:02:24.384 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:24.384 CC module/bdev/nvme/bdev_mdns_client.o 00:02:24.384 CC module/bdev/raid/bdev_raid_rpc.o 00:02:24.384 CC module/bdev/nvme/vbdev_opal.o 00:02:24.384 CC module/bdev/aio/bdev_aio_rpc.o 00:02:24.384 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:24.384 CC module/bdev/raid/bdev_raid_sb.o 00:02:24.384 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:24.384 CC module/blobfs/bdev/blobfs_bdev.o 00:02:24.384 CC module/bdev/raid/raid0.o 00:02:24.384 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:24.384 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:24.384 CC module/bdev/raid/raid1.o 00:02:24.384 CC module/bdev/raid/concat.o 00:02:24.642 LIB libspdk_sock_posix.a 00:02:24.642 SO libspdk_sock_posix.so.5.0 00:02:24.900 LIB libspdk_blobfs_bdev.a 00:02:24.900 SO libspdk_blobfs_bdev.so.5.0 00:02:24.900 LIB libspdk_bdev_split.a 00:02:24.900 SYMLINK libspdk_sock_posix.so 00:02:24.900 SO libspdk_bdev_split.so.5.0 00:02:24.900 LIB libspdk_bdev_null.a 00:02:24.900 SYMLINK libspdk_blobfs_bdev.so 00:02:24.900 SO libspdk_bdev_null.so.5.0 00:02:24.901 LIB libspdk_bdev_delay.a 00:02:24.901 LIB libspdk_bdev_error.a 00:02:24.901 SYMLINK libspdk_bdev_split.so 00:02:24.901 LIB libspdk_bdev_passthru.a 00:02:24.901 LIB libspdk_bdev_gpt.a 00:02:24.901 LIB libspdk_bdev_ftl.a 00:02:24.901 SO libspdk_bdev_delay.so.5.0 00:02:24.901 SO libspdk_bdev_error.so.5.0 00:02:24.901 LIB libspdk_bdev_zone_block.a 00:02:24.901 SYMLINK libspdk_bdev_null.so 00:02:24.901 SO libspdk_bdev_gpt.so.5.0 00:02:24.901 SO libspdk_bdev_passthru.so.5.0 00:02:24.901 SO libspdk_bdev_ftl.so.5.0 00:02:24.901 SO libspdk_bdev_zone_block.so.5.0 00:02:24.901 LIB libspdk_bdev_iscsi.a 00:02:24.901 SYMLINK libspdk_bdev_delay.so 00:02:24.901 SYMLINK libspdk_bdev_error.so 00:02:24.901 LIB libspdk_bdev_lvol.a 00:02:24.901 LIB libspdk_bdev_malloc.a 00:02:24.901 SYMLINK libspdk_bdev_gpt.so 00:02:24.901 SYMLINK libspdk_bdev_ftl.so 00:02:24.901 SYMLINK libspdk_bdev_passthru.so 00:02:24.901 SO libspdk_bdev_iscsi.so.5.0 00:02:24.901 LIB libspdk_bdev_aio.a 00:02:24.901 SO libspdk_bdev_malloc.so.5.0 00:02:24.901 SO libspdk_bdev_lvol.so.5.0 00:02:25.159 SYMLINK libspdk_bdev_zone_block.so 00:02:25.159 SO libspdk_bdev_aio.so.5.0 00:02:25.159 SYMLINK libspdk_bdev_iscsi.so 00:02:25.159 SYMLINK libspdk_bdev_malloc.so 00:02:25.159 SYMLINK libspdk_bdev_lvol.so 00:02:25.159 SYMLINK libspdk_bdev_aio.so 00:02:25.159 LIB libspdk_bdev_virtio.a 00:02:25.159 SO libspdk_bdev_virtio.so.5.0 00:02:25.159 SYMLINK libspdk_bdev_virtio.so 00:02:25.417 LIB libspdk_bdev_raid.a 00:02:25.417 SO libspdk_bdev_raid.so.5.0 00:02:25.675 SYMLINK libspdk_bdev_raid.so 00:02:26.609 LIB libspdk_bdev_nvme.a 00:02:26.868 SO libspdk_bdev_nvme.so.6.0 00:02:26.868 SYMLINK libspdk_bdev_nvme.so 00:02:27.126 CC module/event/subsystems/iobuf/iobuf.o 00:02:27.126 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:27.126 CC module/event/subsystems/sock/sock.o 00:02:27.126 CC module/event/subsystems/scheduler/scheduler.o 00:02:27.126 CC module/event/subsystems/vmd/vmd.o 00:02:27.126 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:27.126 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:27.385 LIB libspdk_event_sock.a 00:02:27.385 LIB libspdk_event_vhost_blk.a 00:02:27.385 LIB libspdk_event_scheduler.a 00:02:27.385 LIB libspdk_event_vmd.a 
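Each module/event/subsystems/* library archived here is a pluggable piece of SPDK's application framework (reactors plus subsystems such as bdev, sock, and vmd) that is initialized when an app like spdk_tgt boots. A sketch of inspecting that framework on a running app, assuming the default RPC socket:

    # Sketch: enumerate the event framework of a running SPDK application.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py framework_get_subsystems   # registered subsystems and deps
    ./scripts/rpc.py framework_get_reactors     # one reactor per polled core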
00:02:27.385 SO libspdk_event_sock.so.4.0 00:02:27.385 SO libspdk_event_vhost_blk.so.2.0 00:02:27.385 LIB libspdk_event_iobuf.a 00:02:27.385 SO libspdk_event_scheduler.so.3.0 00:02:27.385 SO libspdk_event_vmd.so.5.0 00:02:27.385 SO libspdk_event_iobuf.so.2.0 00:02:27.385 SYMLINK libspdk_event_sock.so 00:02:27.385 SYMLINK libspdk_event_vhost_blk.so 00:02:27.385 SYMLINK libspdk_event_scheduler.so 00:02:27.385 SYMLINK libspdk_event_vmd.so 00:02:27.385 SYMLINK libspdk_event_iobuf.so 00:02:27.385 CC module/event/subsystems/accel/accel.o 00:02:27.643 LIB libspdk_event_accel.a 00:02:27.643 SO libspdk_event_accel.so.5.0 00:02:27.643 SYMLINK libspdk_event_accel.so 00:02:27.902 CC module/event/subsystems/bdev/bdev.o 00:02:27.902 LIB libspdk_event_bdev.a 00:02:27.902 SO libspdk_event_bdev.so.5.0 00:02:28.160 SYMLINK libspdk_event_bdev.so 00:02:28.160 CC module/event/subsystems/scsi/scsi.o 00:02:28.160 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:28.160 CC module/event/subsystems/nbd/nbd.o 00:02:28.160 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:28.160 CC module/event/subsystems/ublk/ublk.o 00:02:28.418 LIB libspdk_event_nbd.a 00:02:28.418 LIB libspdk_event_ublk.a 00:02:28.418 LIB libspdk_event_scsi.a 00:02:28.418 SO libspdk_event_ublk.so.2.0 00:02:28.418 SO libspdk_event_nbd.so.5.0 00:02:28.418 SO libspdk_event_scsi.so.5.0 00:02:28.418 SYMLINK libspdk_event_nbd.so 00:02:28.418 SYMLINK libspdk_event_ublk.so 00:02:28.418 LIB libspdk_event_nvmf.a 00:02:28.418 SYMLINK libspdk_event_scsi.so 00:02:28.418 SO libspdk_event_nvmf.so.5.0 00:02:28.418 SYMLINK libspdk_event_nvmf.so 00:02:28.418 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:28.418 CC module/event/subsystems/iscsi/iscsi.o 00:02:28.676 LIB libspdk_event_vhost_scsi.a 00:02:28.676 SO libspdk_event_vhost_scsi.so.2.0 00:02:28.676 LIB libspdk_event_iscsi.a 00:02:28.676 SO libspdk_event_iscsi.so.5.0 00:02:28.676 SYMLINK libspdk_event_vhost_scsi.so 00:02:28.676 SYMLINK libspdk_event_iscsi.so 00:02:28.940 SO libspdk.so.5.0 00:02:28.940 SYMLINK libspdk.so 00:02:28.940 CC app/trace_record/trace_record.o 00:02:28.940 CXX app/trace/trace.o 00:02:28.940 CC app/spdk_top/spdk_top.o 00:02:28.940 CC app/spdk_nvme_discover/discovery_aer.o 00:02:28.940 CC app/spdk_lspci/spdk_lspci.o 00:02:28.940 CC app/spdk_nvme_identify/identify.o 00:02:28.940 CC app/spdk_nvme_perf/perf.o 00:02:28.940 TEST_HEADER include/spdk/accel.h 00:02:28.940 TEST_HEADER include/spdk/accel_module.h 00:02:28.940 TEST_HEADER include/spdk/assert.h 00:02:28.940 CC test/rpc_client/rpc_client_test.o 00:02:28.940 TEST_HEADER include/spdk/barrier.h 00:02:28.940 TEST_HEADER include/spdk/base64.h 00:02:28.940 TEST_HEADER include/spdk/bdev.h 00:02:28.940 TEST_HEADER include/spdk/bdev_module.h 00:02:28.940 TEST_HEADER include/spdk/bdev_zone.h 00:02:28.940 TEST_HEADER include/spdk/bit_array.h 00:02:28.940 TEST_HEADER include/spdk/bit_pool.h 00:02:28.940 TEST_HEADER include/spdk/blob_bdev.h 00:02:28.940 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:28.940 TEST_HEADER include/spdk/blobfs.h 00:02:28.940 TEST_HEADER include/spdk/blob.h 00:02:28.940 TEST_HEADER include/spdk/conf.h 00:02:28.940 TEST_HEADER include/spdk/config.h 00:02:28.940 TEST_HEADER include/spdk/cpuset.h 00:02:28.940 CC app/spdk_dd/spdk_dd.o 00:02:28.940 TEST_HEADER include/spdk/crc16.h 00:02:29.201 TEST_HEADER include/spdk/crc32.h 00:02:29.201 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:29.201 TEST_HEADER include/spdk/crc64.h 00:02:29.201 CC app/nvmf_tgt/nvmf_main.o 00:02:29.201 CC app/iscsi_tgt/iscsi_tgt.o 00:02:29.201 
TEST_HEADER include/spdk/dif.h 00:02:29.201 TEST_HEADER include/spdk/dma.h 00:02:29.201 TEST_HEADER include/spdk/endian.h 00:02:29.201 TEST_HEADER include/spdk/env_dpdk.h 00:02:29.201 CC app/vhost/vhost.o 00:02:29.201 TEST_HEADER include/spdk/env.h 00:02:29.201 CC examples/util/zipf/zipf.o 00:02:29.201 TEST_HEADER include/spdk/event.h 00:02:29.201 CC examples/ioat/verify/verify.o 00:02:29.201 CC examples/idxd/perf/perf.o 00:02:29.201 TEST_HEADER include/spdk/fd_group.h 00:02:29.201 CC examples/sock/hello_world/hello_sock.o 00:02:29.201 CC examples/vmd/lsvmd/lsvmd.o 00:02:29.201 CC examples/vmd/led/led.o 00:02:29.201 CC examples/nvme/reconnect/reconnect.o 00:02:29.201 CC app/fio/nvme/fio_plugin.o 00:02:29.201 TEST_HEADER include/spdk/fd.h 00:02:29.201 CC examples/ioat/perf/perf.o 00:02:29.201 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.201 CC examples/nvme/hotplug/hotplug.o 00:02:29.201 CC examples/nvme/abort/abort.o 00:02:29.201 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:29.201 TEST_HEADER include/spdk/file.h 00:02:29.201 CC examples/nvme/hello_world/hello_world.o 00:02:29.201 CC test/thread/poller_perf/poller_perf.o 00:02:29.201 TEST_HEADER include/spdk/ftl.h 00:02:29.201 CC examples/nvme/arbitration/arbitration.o 00:02:29.201 CC test/event/event_perf/event_perf.o 00:02:29.201 TEST_HEADER include/spdk/gpt_spec.h 00:02:29.201 CC examples/accel/perf/accel_perf.o 00:02:29.201 TEST_HEADER include/spdk/hexlify.h 00:02:29.201 TEST_HEADER include/spdk/histogram_data.h 00:02:29.201 CC app/spdk_tgt/spdk_tgt.o 00:02:29.201 TEST_HEADER include/spdk/idxd.h 00:02:29.201 CC test/nvme/aer/aer.o 00:02:29.201 TEST_HEADER include/spdk/idxd_spec.h 00:02:29.201 TEST_HEADER include/spdk/init.h 00:02:29.201 TEST_HEADER include/spdk/ioat.h 00:02:29.201 TEST_HEADER include/spdk/ioat_spec.h 00:02:29.201 TEST_HEADER include/spdk/iscsi_spec.h 00:02:29.201 TEST_HEADER include/spdk/json.h 00:02:29.201 CC examples/blob/cli/blobcli.o 00:02:29.201 TEST_HEADER include/spdk/jsonrpc.h 00:02:29.201 CC examples/bdev/bdevperf/bdevperf.o 00:02:29.201 TEST_HEADER include/spdk/likely.h 00:02:29.202 CC app/fio/bdev/fio_plugin.o 00:02:29.202 CC examples/thread/thread/thread_ex.o 00:02:29.202 TEST_HEADER include/spdk/log.h 00:02:29.202 TEST_HEADER include/spdk/lvol.h 00:02:29.202 CC examples/bdev/hello_world/hello_bdev.o 00:02:29.202 CC test/blobfs/mkfs/mkfs.o 00:02:29.202 CC examples/nvmf/nvmf/nvmf.o 00:02:29.202 TEST_HEADER include/spdk/memory.h 00:02:29.202 CC test/dma/test_dma/test_dma.o 00:02:29.202 TEST_HEADER include/spdk/mmio.h 00:02:29.202 CC examples/blob/hello_world/hello_blob.o 00:02:29.202 CC test/accel/dif/dif.o 00:02:29.202 CC test/bdev/bdevio/bdevio.o 00:02:29.202 TEST_HEADER include/spdk/nbd.h 00:02:29.202 CC test/app/bdev_svc/bdev_svc.o 00:02:29.202 TEST_HEADER include/spdk/notify.h 00:02:29.202 TEST_HEADER include/spdk/nvme.h 00:02:29.202 TEST_HEADER include/spdk/nvme_intel.h 00:02:29.202 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:29.202 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:29.202 TEST_HEADER include/spdk/nvme_spec.h 00:02:29.202 CC test/lvol/esnap/esnap.o 00:02:29.202 TEST_HEADER include/spdk/nvme_zns.h 00:02:29.202 CC test/env/mem_callbacks/mem_callbacks.o 00:02:29.202 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:29.202 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:29.202 TEST_HEADER include/spdk/nvmf.h 00:02:29.202 TEST_HEADER include/spdk/nvmf_spec.h 00:02:29.202 TEST_HEADER include/spdk/nvmf_transport.h 00:02:29.202 TEST_HEADER include/spdk/opal.h 00:02:29.202 TEST_HEADER 
include/spdk/opal_spec.h 00:02:29.202 TEST_HEADER include/spdk/pci_ids.h 00:02:29.202 TEST_HEADER include/spdk/pipe.h 00:02:29.202 TEST_HEADER include/spdk/queue.h 00:02:29.202 TEST_HEADER include/spdk/reduce.h 00:02:29.202 TEST_HEADER include/spdk/rpc.h 00:02:29.202 TEST_HEADER include/spdk/scheduler.h 00:02:29.202 TEST_HEADER include/spdk/scsi.h 00:02:29.202 TEST_HEADER include/spdk/scsi_spec.h 00:02:29.202 TEST_HEADER include/spdk/sock.h 00:02:29.202 TEST_HEADER include/spdk/stdinc.h 00:02:29.202 TEST_HEADER include/spdk/string.h 00:02:29.202 LINK spdk_lspci 00:02:29.202 TEST_HEADER include/spdk/thread.h 00:02:29.202 TEST_HEADER include/spdk/trace.h 00:02:29.202 TEST_HEADER include/spdk/trace_parser.h 00:02:29.202 TEST_HEADER include/spdk/tree.h 00:02:29.202 TEST_HEADER include/spdk/ublk.h 00:02:29.202 TEST_HEADER include/spdk/util.h 00:02:29.202 TEST_HEADER include/spdk/uuid.h 00:02:29.202 TEST_HEADER include/spdk/version.h 00:02:29.202 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:29.202 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:29.202 TEST_HEADER include/spdk/vhost.h 00:02:29.202 TEST_HEADER include/spdk/vmd.h 00:02:29.202 TEST_HEADER include/spdk/xor.h 00:02:29.202 TEST_HEADER include/spdk/zipf.h 00:02:29.202 CXX test/cpp_headers/accel.o 00:02:29.466 LINK lsvmd 00:02:29.466 LINK rpc_client_test 00:02:29.466 LINK spdk_nvme_discover 00:02:29.466 LINK zipf 00:02:29.466 LINK led 00:02:29.466 LINK event_perf 00:02:29.466 LINK poller_perf 00:02:29.466 LINK interrupt_tgt 00:02:29.466 LINK nvmf_tgt 00:02:29.466 LINK vhost 00:02:29.466 LINK cmb_copy 00:02:29.466 LINK iscsi_tgt 00:02:29.466 LINK spdk_trace_record 00:02:29.466 LINK verify 00:02:29.466 LINK ioat_perf 00:02:29.466 LINK spdk_tgt 00:02:29.466 LINK hello_world 00:02:29.466 LINK mkfs 00:02:29.466 LINK bdev_svc 00:02:29.467 LINK hotplug 00:02:29.467 LINK hello_sock 00:02:29.731 LINK hello_bdev 00:02:29.731 LINK thread 00:02:29.731 LINK aer 00:02:29.731 CXX test/cpp_headers/accel_module.o 00:02:29.731 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:29.731 LINK hello_blob 00:02:29.731 LINK nvmf 00:02:29.731 LINK idxd_perf 00:02:29.731 LINK arbitration 00:02:29.731 LINK spdk_dd 00:02:29.731 CC test/event/reactor/reactor.o 00:02:29.731 LINK reconnect 00:02:29.731 CXX test/cpp_headers/assert.o 00:02:29.731 CC test/nvme/reset/reset.o 00:02:29.731 CC test/env/vtophys/vtophys.o 00:02:29.731 LINK abort 00:02:29.731 CC test/app/histogram_perf/histogram_perf.o 00:02:29.731 LINK spdk_trace 00:02:29.731 CXX test/cpp_headers/barrier.o 00:02:29.731 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:29.731 CC test/env/memory/memory_ut.o 00:02:29.990 CC test/env/pci/pci_ut.o 00:02:29.991 LINK test_dma 00:02:29.991 LINK bdevio 00:02:29.991 LINK dif 00:02:29.991 CC test/app/jsoncat/jsoncat.o 00:02:29.991 CC test/nvme/sgl/sgl.o 00:02:29.991 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:29.991 CXX test/cpp_headers/base64.o 00:02:29.991 CXX test/cpp_headers/bdev.o 00:02:29.991 CC test/app/stub/stub.o 00:02:29.991 LINK accel_perf 00:02:29.991 CC test/nvme/e2edp/nvme_dp.o 00:02:29.991 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.991 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.991 CC test/event/reactor_perf/reactor_perf.o 00:02:29.991 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:29.991 CXX test/cpp_headers/bdev_module.o 00:02:29.991 CC test/event/app_repeat/app_repeat.o 00:02:29.991 LINK nvme_manage 00:02:29.991 LINK pmr_persistence 00:02:29.991 LINK reactor 00:02:29.991 LINK blobcli 00:02:29.991 LINK vtophys 
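The CXX test/cpp_headers/*.o entries (continuing below) compile one tiny C++ file per public spdk/*.h header, which verifies that every installed header is self-contained and usable from C++. A sketch of the same check for a single header; the compiler flags are assumptions, not the test's exact ones:

    # Sketch: check that one public header compiles standalone under C++.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    echo '#include <spdk/accel.h>' > /tmp/hdr_check.cpp
    g++ -std=c++11 -I "$SPDK_DIR/include" -c /tmp/hdr_check.cpp \
        -o /tmp/hdr_check.o && echo 'spdk/accel.h is self-contained'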
00:02:29.991 CXX test/cpp_headers/bdev_zone.o 00:02:29.991 LINK histogram_perf 00:02:29.991 CC test/event/scheduler/scheduler.o 00:02:29.991 CC test/nvme/overhead/overhead.o 00:02:30.262 LINK spdk_bdev 00:02:30.262 LINK spdk_nvme 00:02:30.262 CC test/nvme/err_injection/err_injection.o 00:02:30.262 CXX test/cpp_headers/bit_array.o 00:02:30.262 CC test/nvme/startup/startup.o 00:02:30.262 LINK jsoncat 00:02:30.262 LINK env_dpdk_post_init 00:02:30.262 CC test/nvme/reserve/reserve.o 00:02:30.262 CC test/nvme/simple_copy/simple_copy.o 00:02:30.262 CC test/nvme/connect_stress/connect_stress.o 00:02:30.262 CC test/nvme/boot_partition/boot_partition.o 00:02:30.262 CXX test/cpp_headers/bit_pool.o 00:02:30.262 CC test/nvme/fused_ordering/fused_ordering.o 00:02:30.262 CC test/nvme/compliance/nvme_compliance.o 00:02:30.262 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:30.262 CXX test/cpp_headers/blob_bdev.o 00:02:30.262 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.262 LINK reset 00:02:30.262 LINK reactor_perf 00:02:30.262 CXX test/cpp_headers/blobfs.o 00:02:30.262 LINK stub 00:02:30.262 CXX test/cpp_headers/blob.o 00:02:30.262 LINK app_repeat 00:02:30.262 CXX test/cpp_headers/conf.o 00:02:30.262 CXX test/cpp_headers/config.o 00:02:30.262 CXX test/cpp_headers/cpuset.o 00:02:30.262 CXX test/cpp_headers/crc16.o 00:02:30.262 CC test/nvme/fdp/fdp.o 00:02:30.262 CXX test/cpp_headers/crc32.o 00:02:30.526 CXX test/cpp_headers/crc64.o 00:02:30.526 CC test/nvme/cuse/cuse.o 00:02:30.526 LINK mem_callbacks 00:02:30.526 CXX test/cpp_headers/dma.o 00:02:30.526 CXX test/cpp_headers/dif.o 00:02:30.526 CXX test/cpp_headers/endian.o 00:02:30.526 LINK sgl 00:02:30.526 CXX test/cpp_headers/env_dpdk.o 00:02:30.526 CXX test/cpp_headers/env.o 00:02:30.526 LINK spdk_nvme_perf 00:02:30.526 LINK err_injection 00:02:30.526 LINK nvme_dp 00:02:30.526 CXX test/cpp_headers/event.o 00:02:30.526 LINK startup 00:02:30.526 CXX test/cpp_headers/fd_group.o 00:02:30.526 CXX test/cpp_headers/fd.o 00:02:30.526 LINK scheduler 00:02:30.526 LINK connect_stress 00:02:30.526 CXX test/cpp_headers/file.o 00:02:30.526 LINK boot_partition 00:02:30.526 LINK bdevperf 00:02:30.526 LINK reserve 00:02:30.526 LINK spdk_nvme_identify 00:02:30.526 LINK spdk_top 00:02:30.526 CXX test/cpp_headers/ftl.o 00:02:30.526 LINK pci_ut 00:02:30.526 CXX test/cpp_headers/gpt_spec.o 00:02:30.526 CXX test/cpp_headers/hexlify.o 00:02:30.526 CXX test/cpp_headers/histogram_data.o 00:02:30.790 CXX test/cpp_headers/idxd.o 00:02:30.790 LINK doorbell_aers 00:02:30.790 CXX test/cpp_headers/idxd_spec.o 00:02:30.790 LINK overhead 00:02:30.790 LINK fused_ordering 00:02:30.790 LINK simple_copy 00:02:30.790 CXX test/cpp_headers/init.o 00:02:30.790 LINK nvme_fuzz 00:02:30.790 CXX test/cpp_headers/ioat.o 00:02:30.790 CXX test/cpp_headers/ioat_spec.o 00:02:30.790 CXX test/cpp_headers/iscsi_spec.o 00:02:30.790 CXX test/cpp_headers/json.o 00:02:30.790 CXX test/cpp_headers/jsonrpc.o 00:02:30.790 CXX test/cpp_headers/likely.o 00:02:30.790 CXX test/cpp_headers/log.o 00:02:30.790 CXX test/cpp_headers/lvol.o 00:02:30.790 CXX test/cpp_headers/memory.o 00:02:30.790 LINK vhost_fuzz 00:02:30.790 CXX test/cpp_headers/mmio.o 00:02:30.790 CXX test/cpp_headers/nbd.o 00:02:30.790 CXX test/cpp_headers/notify.o 00:02:30.790 CXX test/cpp_headers/nvme.o 00:02:30.790 CXX test/cpp_headers/nvme_intel.o 00:02:30.790 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.790 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.790 CXX test/cpp_headers/nvme_spec.o 00:02:30.790 CXX test/cpp_headers/nvme_zns.o 00:02:30.790 CXX 
test/cpp_headers/nvmf_cmd.o 00:02:30.790 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:30.790 CXX test/cpp_headers/nvmf.o 00:02:30.790 CXX test/cpp_headers/nvmf_spec.o 00:02:30.790 LINK nvme_compliance 00:02:30.790 CXX test/cpp_headers/nvmf_transport.o 00:02:30.790 CXX test/cpp_headers/opal.o 00:02:31.052 CXX test/cpp_headers/opal_spec.o 00:02:31.052 CXX test/cpp_headers/pci_ids.o 00:02:31.052 CXX test/cpp_headers/pipe.o 00:02:31.052 CXX test/cpp_headers/queue.o 00:02:31.052 CXX test/cpp_headers/reduce.o 00:02:31.052 LINK fdp 00:02:31.052 CXX test/cpp_headers/rpc.o 00:02:31.052 CXX test/cpp_headers/scheduler.o 00:02:31.052 CXX test/cpp_headers/scsi.o 00:02:31.052 CXX test/cpp_headers/scsi_spec.o 00:02:31.053 CXX test/cpp_headers/sock.o 00:02:31.053 CXX test/cpp_headers/stdinc.o 00:02:31.053 CXX test/cpp_headers/string.o 00:02:31.053 CXX test/cpp_headers/thread.o 00:02:31.053 CXX test/cpp_headers/trace.o 00:02:31.053 CXX test/cpp_headers/trace_parser.o 00:02:31.053 CXX test/cpp_headers/tree.o 00:02:31.053 CXX test/cpp_headers/ublk.o 00:02:31.053 CXX test/cpp_headers/util.o 00:02:31.053 CXX test/cpp_headers/uuid.o 00:02:31.053 CXX test/cpp_headers/version.o 00:02:31.053 CXX test/cpp_headers/vfio_user_pci.o 00:02:31.053 CXX test/cpp_headers/vfio_user_spec.o 00:02:31.053 CXX test/cpp_headers/vhost.o 00:02:31.053 CXX test/cpp_headers/vmd.o 00:02:31.053 CXX test/cpp_headers/xor.o 00:02:31.053 CXX test/cpp_headers/zipf.o 00:02:31.311 LINK memory_ut 00:02:31.878 LINK cuse 00:02:32.136 LINK iscsi_fuzz 00:02:34.666 LINK esnap 00:02:34.666 00:02:34.666 real 0m45.355s 00:02:34.666 user 9m36.870s 00:02:34.666 sys 2m9.478s 00:02:34.666 06:39:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:34.666 06:39:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:34.666 ************************************ 00:02:34.666 END TEST make 00:02:34.666 ************************************ 00:02:34.666 06:39:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:34.666 06:39:48 -- nvmf/common.sh@7 -- # uname -s 00:02:34.666 06:39:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:34.666 06:39:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:34.666 06:39:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:34.666 06:39:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:34.666 06:39:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:34.666 06:39:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:34.666 06:39:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:34.666 06:39:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:34.666 06:39:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:34.666 06:39:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:34.666 06:39:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:34.666 06:39:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:34.666 06:39:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:34.666 06:39:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:34.666 06:39:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:34.666 06:39:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:34.666 06:39:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:34.666 06:39:48 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:34.666 06:39:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:34.666 06:39:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.666 06:39:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.667 06:39:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.667 06:39:48 -- paths/export.sh@5 -- # export PATH 00:02:34.667 06:39:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:34.667 06:39:48 -- nvmf/common.sh@46 -- # : 0 00:02:34.667 06:39:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:34.667 06:39:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:34.667 06:39:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:34.667 06:39:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:34.667 06:39:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:34.667 06:39:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:34.667 06:39:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:34.667 06:39:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:34.667 06:39:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:34.667 06:39:48 -- spdk/autotest.sh@32 -- # uname -s 00:02:34.667 06:39:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:34.667 06:39:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:34.667 06:39:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.925 06:39:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:34.925 06:39:48 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:34.925 06:39:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:34.925 06:39:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:34.925 06:39:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:34.925 06:39:48 -- spdk/autotest.sh@48 -- # udevadm_pid=339521 00:02:34.925 06:39:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:34.925 06:39:48 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:34.925 06:39:48 -- spdk/autotest.sh@54 -- # echo 339523 00:02:34.925 06:39:48 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:34.925 
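nvmf/common.sh, sourced above, parameterizes the whole suite: port 4420 (the standard NVMe-oF port), a hostnqn generated by nvme gen-hostnqn, and NVME_CONNECT='nvme connect' for the initiator side. A sketch of the kind of connect command the tests assemble from those variables; the target address and subsystem NQN are placeholders:

    # Sketch: initiator-side connect using the variables seeded above.
    NVMF_PORT=4420
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    nvme connect -t tcp -a 10.0.0.1 -s "$NVMF_PORT" \
        -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN"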
06:39:48 -- spdk/autotest.sh@56 -- # echo 339524 00:02:34.925 06:39:48 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:34.925 06:39:48 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:34.925 06:39:48 -- spdk/autotest.sh@60 -- # echo 339525 00:02:34.925 06:39:48 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:34.925 06:39:48 -- spdk/autotest.sh@62 -- # echo 339526 00:02:34.925 06:39:48 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:34.925 06:39:48 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:34.925 06:39:48 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:34.925 06:39:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:34.925 06:39:48 -- common/autotest_common.sh@10 -- # set +x 00:02:34.925 06:39:48 -- spdk/autotest.sh@70 -- # create_test_list 00:02:34.925 06:39:48 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:34.925 06:39:48 -- common/autotest_common.sh@10 -- # set +x 00:02:34.925 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:34.925 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:34.925 06:39:48 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:34.925 06:39:48 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.925 06:39:48 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.925 06:39:48 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:34.925 06:39:48 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:34.925 06:39:48 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:34.925 06:39:48 -- common/autotest_common.sh@1440 -- # uname 00:02:34.925 06:39:48 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:34.925 06:39:48 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:34.925 06:39:48 -- common/autotest_common.sh@1460 -- # uname 00:02:34.925 06:39:48 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:34.925 06:39:48 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:34.925 06:39:48 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:34.925 06:39:48 -- spdk/autotest.sh@83 -- # hash lcov 00:02:34.925 06:39:48 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:34.925 06:39:48 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:34.925 --rc lcov_branch_coverage=1 00:02:34.925 --rc lcov_function_coverage=1 00:02:34.925 --rc genhtml_branch_coverage=1 00:02:34.925 --rc genhtml_function_coverage=1 00:02:34.925 --rc genhtml_legend=1 00:02:34.925 --rc geninfo_all_blocks=1 00:02:34.925 ' 00:02:34.925 06:39:48 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:34.925 --rc lcov_branch_coverage=1 00:02:34.925 --rc lcov_function_coverage=1 00:02:34.925 --rc genhtml_branch_coverage=1 00:02:34.925 --rc genhtml_function_coverage=1 00:02:34.925 --rc genhtml_legend=1 00:02:34.925 
--rc geninfo_all_blocks=1 00:02:34.925 ' 00:02:34.925 06:39:48 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:34.925 --rc lcov_branch_coverage=1 00:02:34.925 --rc lcov_function_coverage=1 00:02:34.925 --rc genhtml_branch_coverage=1 00:02:34.925 --rc genhtml_function_coverage=1 00:02:34.925 --rc genhtml_legend=1 00:02:34.925 --rc geninfo_all_blocks=1 00:02:34.925 --no-external' 00:02:34.925 06:39:48 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:34.925 --rc lcov_branch_coverage=1 00:02:34.925 --rc lcov_function_coverage=1 00:02:34.925 --rc genhtml_branch_coverage=1 00:02:34.925 --rc genhtml_function_coverage=1 00:02:34.925 --rc genhtml_legend=1 00:02:34.925 --rc geninfo_all_blocks=1 00:02:34.925 --no-external' 00:02:34.925 06:39:48 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:34.925 lcov: LCOV version 1.14 00:02:34.925 06:39:49 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:49.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:49.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:49.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:49.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:49.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:49.838 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:04.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:04.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:04.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:04.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:04.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:04.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:04.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:04.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:04.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:04.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:04.716 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
[The same "no functions found" / "GCOV did not produce any data" warning pair repeats for every remaining test/cpp_headers/*.gcno file.]
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:04.717 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:04.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:04.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 
00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:04.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:04.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:04.718 06:40:18 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:04.718 06:40:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:04.718 06:40:18 -- common/autotest_common.sh@10 -- # set +x 00:03:04.718 06:40:18 -- spdk/autotest.sh@102 -- # rm -f 00:03:04.718 06:40:18 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.091 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:06.091 
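The warnings above are benign: each cpp_headers object just compiles one public SPDK header (to check it is C++-clean), so its coverage notes file contains no instrumented functions for geninfo to report. A minimal sketch for spotting such empty .gcno files ahead of time (illustrative bash; assumes a `gcov` matching the compiler that produced the notes files, and that `gcov -n` prints a summary without writing .gcov output):

```bash
# Sketch: flag coverage notes files that would trigger geninfo's
# "no functions found" warning (paths taken from the log above).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers || exit 1
for gcno in *.gcno; do
    # An object with no instrumented functions yields no "Lines executed" stanza.
    if ! gcov -n "$gcno" 2>/dev/null | grep -q 'Lines executed'; then
        echo "no instrumented functions: $gcno"
    fi
done
```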
00:03:04.718 06:40:18 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:03:04.718 06:40:18 -- common/autotest_common.sh@712 -- # xtrace_disable
00:03:04.718 06:40:18 -- common/autotest_common.sh@10 -- # set +x
00:03:04.718 06:40:18 -- spdk/autotest.sh@102 -- # rm -f
00:03:04.718 06:40:18 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:06.091 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:03:06.091 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:03:06.091 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:03:06.349 [... the same "Already using the ioatdma driver" notice for the remaining I/OAT channels 0000:00:04.0-04.5 and 0000:80:04.0-04.7 ...]
00:03:06.349 06:40:20 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:03:06.349 06:40:20 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:03:06.349 06:40:20 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:03:06.349 06:40:20 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:03:06.349 06:40:20 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:03:06.349 06:40:20 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:03:06.349 06:40:20 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:03:06.349 06:40:20 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:06.349 06:40:20 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:03:06.349 06:40:20 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:03:06.349 06:40:20 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:03:06.349 06:40:20 -- spdk/autotest.sh@121 -- # grep -v p
00:03:06.349 06:40:20 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:03:06.349 06:40:20 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:03:06.349 06:40:20 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:03:06.349 06:40:20 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:03:06.349 06:40:20 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:06.607 No valid GPT data, bailing
00:03:06.607 06:40:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:06.607 06:40:20 -- scripts/common.sh@393 -- # pt=
00:03:06.607 06:40:20 -- scripts/common.sh@394 -- # return 1
00:03:06.607 06:40:20 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:06.607 1+0 records in
00:03:06.607 1+0 records out
00:03:06.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00274497 s, 382 MB/s
00:03:06.607 06:40:20 -- spdk/autotest.sh@129 -- # sync
00:03:06.607 06:40:20 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:06.607 06:40:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:06.607 06:40:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes
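Pre-cleanup in one picture: zoned namespaces are excluded, and any remaining namespace whose GPT probe fails is treated as free and has its first MiB zeroed to clear stale metadata. A condensed sketch of that logic, following the traced helper names (illustrative, not the exact project scripts):

```bash
# Condensed form of the pre-cleanup trace above (illustrative; needs root).
for nvme in /dev/nvme*n*; do
    [[ $nvme == *p* ]] && continue            # skip partitions, like `grep -v p`
    dev=${nvme##*/}
    zoned=/sys/block/$dev/queue/zoned
    # is_block_zoned: leave zoned namespaces untouched
    [[ -e $zoned && $(<"$zoned") != none ]] && continue
    # block_in_use: no partition table (spdk-gpt.py bails, blkid reports no
    # PTTYPE) means the device is considered free, so it gets wiped.
    if [[ -z $(blkid -s PTTYPE -o value "$nvme") ]]; then
        dd if=/dev/zero of="$nvme" bs=1M count=1
    fi
done
sync
```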
00:03:08.507 06:40:22 -- spdk/autotest.sh@135 -- # uname -s
00:03:08.507 06:40:22 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:03:08.507 06:40:22 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:08.507 06:40:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:08.507 06:40:22 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:08.507 06:40:22 -- common/autotest_common.sh@10 -- # set +x
00:03:08.507 ************************************
00:03:08.507 START TEST setup.sh
00:03:08.507 ************************************
00:03:08.507 06:40:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:08.507 * Looking for test storage...
00:03:08.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:08.507 06:40:22 -- setup/test-setup.sh@10 -- # uname -s
00:03:08.507 06:40:22 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:08.507 06:40:22 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:08.507 06:40:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:08.507 06:40:22 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:08.507 06:40:22 -- common/autotest_common.sh@10 -- # set +x
00:03:08.507 ************************************
00:03:08.507 START TEST acl
00:03:08.507 ************************************
00:03:08.507 06:40:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:08.507 * Looking for test storage...
00:03:08.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:08.507 06:40:22 -- setup/acl.sh@10 -- # get_zoned_devs
00:03:08.507 06:40:22 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:03:08.507 06:40:22 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:03:08.507 06:40:22 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:03:08.507 06:40:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:03:08.507 06:40:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:03:08.507 06:40:22 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:03:08.507 06:40:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:08.507 06:40:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:03:08.507 06:40:22 -- setup/acl.sh@12 -- # devs=()
00:03:08.507 06:40:22 -- setup/acl.sh@12 -- # declare -a devs
00:03:08.507 06:40:22 -- setup/acl.sh@13 -- # drivers=()
00:03:08.507 06:40:22 -- setup/acl.sh@13 -- # declare -A drivers
00:03:08.507 06:40:22 -- setup/acl.sh@51 -- # setup reset
00:03:08.507 06:40:22 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:08.507 06:40:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:09.879 06:40:24 -- setup/acl.sh@52 -- # collect_setup_devs
00:03:09.879 06:40:24 -- setup/acl.sh@16 -- # local dev driver
00:03:09.879 06:40:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:09.879 06:40:24 -- setup/acl.sh@15 -- # setup output status
00:03:09.879 06:40:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.879 06:40:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:11.253 Hugepages
00:03:11.253 node hugesize free / total
00:03:11.253 06:40:25 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:11.253 06:40:25 -- setup/acl.sh@19 -- # continue
00:03:11.253 06:40:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:11.253 06:40:25 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:11.253 06:40:25 -- setup/acl.sh@19 -- # continue
00:03:11.253 06:40:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:11.253 [... the remaining hugesize lines (1048576kB / 2048kB for the second node) are skipped the same way ...]
00:03:11.253 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:11.253 06:40:25 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:11.253 06:40:25 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:11.253 06:40:25 -- setup/acl.sh@20 -- # continue
00:03:11.253 06:40:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:11.254 [... the same match/skip triplet repeats for the remaining ioatdma channels 0000:00:04.1-04.7 and 0000:80:04.0-04.7 ...]
00:03:11.512 06:40:25 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:03:11.512 06:40:25 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:11.512 06:40:25 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:03:11.512 06:40:25 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:11.512 06:40:25 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:11.512 06:40:25 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:11.512 06:40:25 -- setup/acl.sh@24 -- # (( 1 > 0 ))
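collect_setup_devs above is just a parse of `setup.sh status`: every output line whose BDF column matches *:*:*.* and whose driver column is nvme gets recorded; everything else (hugepage lines, headers, ioatdma channels) is skipped. A minimal re-statement of that loop (illustrative; it mirrors the traced reads rather than quoting acl.sh verbatim):

```bash
# Sketch of collect_setup_devs: harvest nvme-bound controllers from
# `setup.sh status` output (columns: Type BDF Vendor Device NUMA Driver ...).
devs=()
declare -A drivers
while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue     # skip the Hugepages/header lines
    [[ $driver == nvme ]] || continue     # only NVMe controllers matter here
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status)
echo "found ${#devs[@]} nvme controller(s): ${devs[*]}"
```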
00:03:11.512 06:40:25 -- setup/acl.sh@54 -- # run_test denied denied
00:03:11.512 06:40:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:11.512 06:40:25 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:11.512 06:40:25 -- common/autotest_common.sh@10 -- # set +x
00:03:11.512 ************************************
00:03:11.512 START TEST denied
00:03:11.512 ************************************
00:03:11.512 06:40:25 -- common/autotest_common.sh@1104 -- # denied
00:03:11.512 06:40:25 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:03:11.512 06:40:25 -- setup/acl.sh@38 -- # setup output config
00:03:11.512 06:40:25 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:03:11.512 06:40:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.512 06:40:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:12.885 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:03:12.885 06:40:27 -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:03:12.885 06:40:27 -- setup/acl.sh@28 -- # local dev driver
00:03:12.885 06:40:27 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:12.885 06:40:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:03:12.885 06:40:27 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:03:12.885 06:40:27 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:12.885 06:40:27 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:12.885 06:40:27 -- setup/acl.sh@41 -- # setup reset
00:03:12.885 06:40:27 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:12.885 06:40:27 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:15.447 
00:03:15.447 real 0m3.905s
00:03:15.447 user 0m1.174s
00:03:15.447 sys 0m1.888s
00:03:15.447 06:40:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.447 06:40:29 -- common/autotest_common.sh@10 -- # set +x
00:03:15.447 ************************************
00:03:15.447 END TEST denied
00:03:15.447 ************************************
00:03:15.447 06:40:29 -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:15.447 06:40:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:15.447 06:40:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:15.447 06:40:29 -- common/autotest_common.sh@10 -- # set +x
00:03:15.447 ************************************
00:03:15.447 START TEST allowed
00:03:15.447 ************************************
00:03:15.447 06:40:29 -- common/autotest_common.sh@1104 -- # allowed
00:03:15.447 06:40:29 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:03:15.447 06:40:29 -- setup/acl.sh@45 -- # setup output config
00:03:15.447 06:40:29 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:03:15.447 06:40:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.447 06:40:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:17.972 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:03:17.972 06:40:32 -- setup/acl.sh@47 -- # verify
00:03:17.972 06:40:32 -- setup/acl.sh@28 -- # local dev driver
00:03:17.972 06:40:32 -- setup/acl.sh@48 -- # setup reset
00:03:17.972 06:40:32 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:17.972 06:40:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:19.871 
00:03:19.871 real 0m4.333s
00:03:19.871 user 0m1.237s
00:03:19.871 sys 0m2.029s
00:03:19.871 06:40:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.871 06:40:33 -- common/autotest_common.sh@10 -- # set +x
00:03:19.871 ************************************
00:03:19.871 END TEST allowed
00:03:19.871 ************************************
00:03:19.871 
00:03:19.871 real 0m11.380s
00:03:19.871 user 0m3.639s
00:03:19.871 sys 0m5.936s
00:03:19.871 06:40:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.871 06:40:33 -- common/autotest_common.sh@10 -- # set +x
00:03:19.871 ************************************
00:03:19.871 END TEST acl
00:03:19.871 ************************************
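The two sub-tests exercise opposite sides of the same filter: setup.sh honors PCI_BLOCKED and PCI_ALLOWED when deciding which controllers to rebind. The pair of invocations verified above reduces to the following (illustrative; run from the SPDK checkout with the greps taken directly from the trace):

```bash
# denied: the blocked controller stays on its kernel driver and is reported as skipped.
PCI_BLOCKED=' 0000:88:00.0' ./scripts/setup.sh config \
    | grep 'Skipping denied controller at 0000:88:00.0'

# allowed: only the listed controller is rebound (nvme -> vfio-pci on this node).
PCI_ALLOWED=0000:88:00.0 ./scripts/setup.sh config \
    | grep -E '0000:88:00.0 .*: nvme -> .*'
```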
00:03:19.871 06:40:33 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:19.871 06:40:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:19.871 06:40:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:19.871 06:40:33 -- common/autotest_common.sh@10 -- # set +x
00:03:19.871 ************************************
00:03:19.871 START TEST hugepages
00:03:19.871 ************************************
00:03:19.871 06:40:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:19.871 * Looking for test storage...
00:03:19.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:19.871 06:40:33 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:19.871 06:40:33 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:19.871 06:40:33 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:19.871 06:40:33 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:19.871 06:40:33 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:19.871 06:40:33 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:19.871 06:40:33 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:19.871 06:40:33 -- setup/common.sh@18 -- # local node=
00:03:19.871 06:40:33 -- setup/common.sh@19 -- # local var val
00:03:19.871 06:40:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.871 06:40:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.871 06:40:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.871 06:40:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.871 06:40:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.871 06:40:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.871 06:40:33 -- setup/common.sh@31 -- # IFS=': '
00:03:19.871 06:40:33 -- setup/common.sh@31 -- # read -r var val _
00:03:19.872 06:40:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35544028 kB' 'MemAvailable: 40286704 kB' 'Buffers: 2696 kB' 'Cached: 18350260 kB' 'SwapCached: 0 kB' 'Active: 14335244 kB' 'Inactive: 4480652 kB' 'Active(anon): 13700112 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 466764 kB' 'Mapped: 216944 kB' 'Shmem: 13237172 kB' 'KReclaimable: 241032 kB' 'Slab: 634012 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392980 kB' 'KernelStack: 13008 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14827736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198860 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
00:03:19.872 06:40:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:19.872 06:40:33 -- setup/common.sh@32 -- # continue
00:03:19.872 06:40:33 -- setup/common.sh@31 -- # IFS=': '
00:03:19.872 06:40:33 -- setup/common.sh@31 -- # read -r var val _
00:03:19.872 [... the same no-match/continue/read cycle repeats for every other /proc/meminfo field (MemFree through HugePages_Surp) until the Hugepagesize line is reached ...]
00:03:19.873 06:40:33 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:19.873 06:40:33 -- setup/common.sh@33 -- # echo 2048
00:03:19.873 06:40:33 -- setup/common.sh@33 -- # return 0
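All the traced get_meminfo does is scan /proc/meminfo field by field until the requested key matches, then print its value; with a node argument it reads the per-node meminfo instead, after stripping the "Node N " prefix. The same lookups in one line each (equivalent sketch, not the project helper):

```bash
# System-wide hugepage size in kB, as `get_meminfo Hugepagesize` computes above:
awk -F': *' '$1 == "Hugepagesize" {print $2+0}' /proc/meminfo        # -> 2048

# Per-node variant: lines look like "Node 0 HugePages_Total: 1024", which is
# why the helper strips the prefix with "${mem[@]#Node +([0-9]) }".
awk '$3 == "HugePages_Total:" {print $4}' /sys/devices/system/node/node0/meminfo
```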
00:03:19.873 06:40:33 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:19.873 06:40:33 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:19.873 06:40:33 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:19.873 06:40:33 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:19.873 06:40:33 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:19.873 06:40:33 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:19.873 06:40:33 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:19.873 06:40:33 -- setup/hugepages.sh@207 -- # get_nodes
00:03:19.873 06:40:33 -- setup/hugepages.sh@27 -- # local node
00:03:19.873 06:40:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.873 06:40:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:19.873 06:40:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.873 06:40:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:19.873 06:40:33 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.873 06:40:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.873 06:40:33 -- setup/hugepages.sh@208 -- # clear_hp
00:03:19.873 06:40:33 -- setup/hugepages.sh@37 -- # local node hp
00:03:19.873 06:40:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:19.873 06:40:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.873 06:40:33 -- setup/hugepages.sh@41 -- # echo 0
00:03:19.873 06:40:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.873 06:40:33 -- setup/hugepages.sh@41 -- # echo 0
00:03:19.873 06:40:33 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:19.873 06:40:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.873 06:40:33 -- setup/hugepages.sh@41 -- # echo 0
00:03:19.873 06:40:33 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.873 06:40:33 -- setup/hugepages.sh@41 -- # echo 0
00:03:19.873 06:40:33 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:19.873 06:40:33 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
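get_nodes first records the current per-node 2048kB counts (2048 pages on node0, 0 on node1 here), and clear_hp then zeroes every hugepage size on both nodes so each sub-test starts from a clean pool. A standalone rendering of that reset (illustrative; writing these sysfs knobs requires root):

```bash
# Zero all per-node hugepage pools, as clear_hp does in the trace above.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"    # covers both 2048kB and 1048576kB pools
    done
done
export CLEAR_HUGE=yes
```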
00:03:19.873 06:40:33 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:19.873 06:40:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:19.873 06:40:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:19.873 06:40:33 -- common/autotest_common.sh@10 -- # set +x
00:03:19.873 ************************************
00:03:19.873 START TEST default_setup
00:03:19.873 ************************************
00:03:19.873 06:40:33 -- common/autotest_common.sh@1104 -- # default_setup
00:03:19.873 06:40:33 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:19.873 06:40:33 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:19.873 06:40:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:19.873 06:40:33 -- setup/hugepages.sh@51 -- # shift
00:03:19.873 06:40:33 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:19.873 06:40:33 -- setup/hugepages.sh@52 -- # local node_ids
00:03:19.873 06:40:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.873 06:40:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:19.873 06:40:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:19.873 06:40:33 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:19.873 06:40:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.873 06:40:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:19.873 06:40:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.873 06:40:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.873 06:40:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.873 06:40:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:19.873 06:40:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:19.873 06:40:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:19.873 06:40:33 -- setup/hugepages.sh@73 -- # return 0
00:03:19.873 06:40:33 -- setup/hugepages.sh@137 -- # setup output
00:03:19.873 06:40:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.873 06:40:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.246 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:21.246 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:21.246 [... the remaining I/OAT channels 0000:00:04.0-04.5 and 0000:80:04.0-04.7 are rebound ioatdma -> vfio-pci the same way ...]
00:03:22.182 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
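get_test_nr_hugepages converts the requested size into page counts (2097152 kB divided by the 2048 kB page size gives 1024 pages) and, because only node 0 was passed, pins the whole allocation there. What this default_setup run effectively asks the kernel for can be reproduced as (illustrative standalone sketch; requires root):

```bash
# What the traced default_setup allocation boils down to (sketch).
size_kb=2097152                      # requested pool: 2 GiB
page_kb=2048                         # Hugepagesize reported by get_meminfo
nr=$(( size_kb / page_kb ))          # -> 1024 pages
echo "$nr" > /sys/devices/system/node/node0/hugepages/hugepages-${page_kb}kB/nr_hugepages
```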
00:03:22.444 06:40:36 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:22.444 06:40:36 -- setup/hugepages.sh@89 -- # local node
00:03:22.444 06:40:36 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.444 06:40:36 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.444 06:40:36 -- setup/hugepages.sh@92 -- # local surp
00:03:22.444 06:40:36 -- setup/hugepages.sh@93 -- # local resv
00:03:22.444 06:40:36 -- setup/hugepages.sh@94 -- # local anon
00:03:22.444 06:40:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.444 06:40:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.444 06:40:36 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.444 06:40:36 -- setup/common.sh@18 -- # local node=
00:03:22.444 06:40:36 -- setup/common.sh@19 -- # local var val
00:03:22.444 06:40:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.444 06:40:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.444 06:40:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.444 06:40:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.444 06:40:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.444 06:40:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.444 06:40:36 -- setup/common.sh@31 -- # IFS=': '
00:03:22.444 06:40:36 -- setup/common.sh@31 -- # read -r var val _
00:03:22.444 06:40:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37662916 kB' 'MemAvailable: 42405592 kB' 'Buffers: 2696 kB' 'Cached: 18350348 kB' 'SwapCached: 0 kB' 'Active: 14355328 kB' 'Inactive: 4480652 kB' 'Active(anon): 13720196 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486816 kB' 'Mapped: 216992 kB' 'Shmem: 13237260 kB' 'KReclaimable: 241032 kB' 'Slab: 633536 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392504 kB' 'KernelStack: 13360 kB' 'PageTables: 9916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14851124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199260 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo key in turn and continue past every key that is not AnonHugePages]
00:03:22.445 06:40:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:22.445 06:40:36 -- setup/common.sh@33 -- # echo 0
00:03:22.445 06:40:36 -- setup/common.sh@33 -- # return 0
00:03:22.445 06:40:36 -- setup/hugepages.sh@97 -- # anon=0
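The trace above is get_meminfo scanning /proc/meminfo line by line with IFS=': ' until the requested key matches, then echoing its value. A standalone sketch of that pattern (hypothetical helper name, not the verbatim setup/common.sh):

# Sketch: fetch one /proc/meminfo value the way the traced loop does.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do            # "AnonHugePages: 0 kB" -> var, val, unit
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < /proc/meminfo
    return 1                                        # key not present
}
get_meminfo_value AnonHugePages                     # prints 0 on this node, matching anon=0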
00:03:22.445 06:40:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.445 06:40:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.445 06:40:36 -- setup/common.sh@18 -- # local node=
00:03:22.445 06:40:36 -- setup/common.sh@19 -- # local var val
00:03:22.445 06:40:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.445 06:40:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.445 06:40:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.445 06:40:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.445 06:40:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.445 06:40:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.445 06:40:36 -- setup/common.sh@31 -- # IFS=': '
00:03:22.445 06:40:36 -- setup/common.sh@31 -- # read -r var val _
00:03:22.445 06:40:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37664916 kB' 'MemAvailable: 42407592 kB' 'Buffers: 2696 kB' 'Cached: 18350352 kB' 'SwapCached: 0 kB' 'Active: 14355888 kB' 'Inactive: 4480652 kB' 'Active(anon): 13720756 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486700 kB' 'Mapped: 217000 kB' 'Shmem: 13237264 kB' 'KReclaimable: 241032 kB' 'Slab: 633536 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392504 kB' 'KernelStack: 13296 kB' 'PageTables: 9944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14851136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199228 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: same per-key scan; every key before HugePages_Surp is skipped with continue]
00:03:22.446 06:40:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.446 06:40:36 -- setup/common.sh@33 -- # echo 0
00:03:22.446 06:40:36 -- setup/common.sh@33 -- # return 0
00:03:22.446 06:40:36 -- setup/hugepages.sh@99 -- # surp=0
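HugePages_Surp reads back 0, i.e. no pages were allocated beyond the configured pool. The same pool counters are also exported per page-size class under /sys/kernel/mm/hugepages, which avoids parsing /proc/meminfo entirely; a small sketch for the 2048 kB class this run uses (Hugepagesize: 2048 kB above):

# Sketch: read the 2 MiB hugepage pool counters straight from sysfs.
hp=/sys/kernel/mm/hugepages/hugepages-2048kB
for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%s=%s\n' "$f" "$(<"$hp/$f")"   # e.g. nr_hugepages=1024 on this box
done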
00:03:22.446 06:40:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:22.446 06:40:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:22.447 06:40:36 -- setup/common.sh@18 -- # local node=
00:03:22.447 06:40:36 -- setup/common.sh@19 -- # local var val
00:03:22.447 06:40:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.447 06:40:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.447 06:40:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.447 06:40:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.447 06:40:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.447 06:40:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.447 06:40:36 -- setup/common.sh@31 -- # IFS=': '
00:03:22.447 06:40:36 -- setup/common.sh@31 -- # read -r var val _
00:03:22.447 06:40:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37667080 kB' 'MemAvailable: 42409756 kB' 'Buffers: 2696 kB' 'Cached: 18350364 kB' 'SwapCached: 0 kB' 'Active: 14355128 kB' 'Inactive: 4480652 kB' 'Active(anon): 13719996 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485936 kB' 'Mapped: 216992 kB' 'Shmem: 13237276 kB' 'KReclaimable: 241032 kB' 'Slab: 633576 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392544 kB' 'KernelStack: 13216 kB' 'PageTables: 10228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14851152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199228 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: same per-key scan; every key before HugePages_Rsvd is skipped with continue]
00:03:22.448 06:40:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:22.448 06:40:36 -- setup/common.sh@33 -- # echo 0
00:03:22.448 06:40:36 -- setup/common.sh@33 -- # return 0
00:03:22.448 06:40:36 -- setup/hugepages.sh@100 -- # resv=0
00:03:22.448 06:40:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:22.448 nr_hugepages=1024
00:03:22.448 06:40:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:22.448 resv_hugepages=0
00:03:22.448 06:40:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:22.448 surplus_hugepages=0
00:03:22.448 06:40:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.448 anon_hugepages=0
00:03:22.448 06:40:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.448 06:40:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
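The @107 and @109 checks are the heart of verify_nr_hugepages: the pool reported by the kernel must equal the requested nr_hugepages once surplus and reserved pages are accounted for (1024 == 1024 + 0 + 0 here). A sketch of the same invariant gathered in one awk pass instead of one get_meminfo scan per key (an alternative, not how setup/common.sh is written):

# Sketch: pull all HugePages_* counters in a single pass and re-check the invariant.
eval "$(awk -F': +' '/^HugePages_/ { printf "%s=%s\n", $1, $2 }' /proc/meminfo)"
if (( HugePages_Total == 1024 + HugePages_Surp + HugePages_Rsvd )); then
    echo "hugepage pool matches the requested 1024 pages"
fi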
00:03:22.448 06:40:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.448 06:40:36 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.448 06:40:36 -- setup/common.sh@18 -- # local node=
00:03:22.448 06:40:36 -- setup/common.sh@19 -- # local var val
00:03:22.448 06:40:36 -- setup/common.sh@20 -- # local mem_f mem
00:03:22.448 06:40:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.448 06:40:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.448 06:40:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.448 06:40:36 -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.448 06:40:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.448 06:40:36 -- setup/common.sh@31 -- # IFS=': '
00:03:22.448 06:40:36 -- setup/common.sh@31 -- # read -r var val _
00:03:22.448 06:40:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37667672 kB' 'MemAvailable: 42410348 kB' 'Buffers: 2696 kB' 'Cached: 18350364 kB' 'SwapCached: 0 kB' 'Active: 14353896 kB' 'Inactive: 4480652 kB' 'Active(anon): 13718764 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484760 kB' 'Mapped: 217000 kB' 'Shmem: 13237276 kB' 'KReclaimable: 241032 kB' 'Slab: 633576 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392544 kB' 'KernelStack: 12944 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14847376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: same per-key scan; every key before HugePages_Total is skipped with continue]
00:03:22.449 06:40:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.449 06:40:36 -- setup/common.sh@33 -- # echo 1024
00:03:22.449 06:40:36 -- setup/common.sh@33 -- # return 0
00:03:22.449 06:40:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.449 06:40:36 -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.449 06:40:36 -- setup/hugepages.sh@27 -- # local node
00:03:22.449 06:40:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.450 06:40:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:22.450 06:40:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.450 06:40:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:22.450 06:40:36 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.450 06:40:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
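get_nodes found two NUMA nodes (nodes_sys[0]=1024, nodes_sys[1]=0), and the loop that follows re-reads the counters per node: when get_meminfo is given a node number, the @23/@24 entries below switch mem_f to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that @29 strips. A sketch of the same per-node read using plain awk rather than the traced helper:

# Sketch: per-node hugepage counters; node meminfo lines look like
#   "Node 0 HugePages_Total:  1024"
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    free=$(awk '$3 == "HugePages_Free:" {print $4}' "$node/meminfo")
    echo "node$n: HugePages_Total=$total HugePages_Free=$free"
done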
00:03:22.450 06:40:36 -- setup/common.sh@31-32 -- # read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] || continue -- repeated for each non-matching node0 key, MemTotal through HugePages_Free
00:03:22.451 06:40:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.451 06:40:36 -- setup/common.sh@33 -- # echo 0
00:03:22.451 06:40:36 -- setup/common.sh@33 -- # return 0
00:03:22.451 06:40:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.451 06:40:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.451 06:40:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.451 06:40:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.451 06:40:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:22.451 node0=1024 expecting 1024
00:03:22.451 06:40:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:22.451 
00:03:22.451 real	0m2.646s
00:03:22.451 user	0m0.703s
00:03:22.451 sys	0m0.942s
00:03:22.451 06:40:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:22.451 06:40:36 -- common/autotest_common.sh@10 -- # set +x
00:03:22.451 ************************************
00:03:22.451 END TEST default_setup
00:03:22.451 ************************************
00:03:22.451 06:40:36 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:22.451 06:40:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:22.451 06:40:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:22.451 06:40:36 -- common/autotest_common.sh@10 -- # set +x
00:03:22.451 ************************************
00:03:22.451 START TEST per_node_1G_alloc
00:03:22.451 ************************************
00:03:22.451 06:40:36 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:03:22.451 06:40:36 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:22.451 06:40:36 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:22.451 06:40:36 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:22.451 06:40:36 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:22.451 06:40:36 -- setup/hugepages.sh@51 -- # shift
00:03:22.451 06:40:36 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:22.451 06:40:36 -- setup/hugepages.sh@52 -- # local node_ids
00:03:22.451 06:40:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.451 06:40:36 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:22.451 06:40:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:22.451 06:40:36 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:22.451 06:40:36 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.451 06:40:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:22.451 06:40:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.451 06:40:36 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.451 06:40:36 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.451 06:40:36 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:22.451 06:40:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.451 06:40:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.451 06:40:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:22.451 06:40:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:22.451 06:40:36 -- setup/hugepages.sh@73 -- # return 0
00:03:22.451 06:40:36 -- setup/hugepages.sh@146 -- # NRHUGE=512
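The sizing just traced reduces to simple arithmetic. A hedged sketch under this run's numbers (variable names are illustrative, not the script's own): a 1048576 kB, i.e. 1 GiB, request against the 2048 kB hugepage size reported in the meminfo snapshots yields 512 pages, and get_test_nr_hugepages_per_node assigns that count to each of nodes 0 and 1:

    size_kb=1048576        # first argument to get_test_nr_hugepages above
    hugepagesize_kb=2048   # "Hugepagesize: 2048 kB" in the snapshots
    pages=$(( size_kb / hugepagesize_kb ))        # 512
    for node in 0 1; do
        echo "node${node}=${pages}"               # mirrors nodes_test[_no_nodes]=512
    done
    echo "expected HugePages_Total: $(( pages * 2 ))"   # 1024 across both nodes

That 1024 total is exactly what verify_nr_hugepages checks against the kernel later in the trace.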
00:03:22.451 06:40:36 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:22.451 06:40:36 -- setup/hugepages.sh@146 -- # setup output
00:03:22.451 06:40:36 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.451 06:40:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:23.825 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.825 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.825 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.825 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.825 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.825 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.825 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.825 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.825 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.825 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.825 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.825 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.825 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.825 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.825 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.825 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.825 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:24.088 06:40:38 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:24.088 06:40:38 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:24.088 06:40:38 -- setup/hugepages.sh@89 -- # local node
00:03:24.088 06:40:38 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:24.088 06:40:38 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:24.088 06:40:38 -- setup/hugepages.sh@92 -- # local surp
00:03:24.088 06:40:38 -- setup/hugepages.sh@93 -- # local resv
00:03:24.088 06:40:38 -- setup/hugepages.sh@94 -- # local anon
00:03:24.088 06:40:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:24.088 06:40:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:24.088 06:40:38 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:24.088 06:40:38 -- setup/common.sh@18 -- # local node=
00:03:24.088 06:40:38 -- setup/common.sh@19 -- # local var val
00:03:24.088 06:40:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.088 06:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.088 06:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.088 06:40:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.088 06:40:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.088 06:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.088 06:40:38 -- setup/common.sh@31 -- # IFS=': '
00:03:24.088 06:40:38 -- setup/common.sh@31 -- # read -r var val _
00:03:24.088 06:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37658384 kB' 'MemAvailable: 42401060 kB' 'Buffers: 2696 kB' 'Cached: 18350428 kB' 'SwapCached: 0 kB' 'Active: 14354044 kB' 'Inactive: 4480652 kB' 'Active(anon): 13718912 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484840 kB' 'Mapped: 217008 kB' 'Shmem: 13237340 kB' 'KReclaimable: 241032 kB' 'Slab: 633608 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392576 kB' 'KernelStack: 13056 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14847212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199052 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
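Two details in the block above are worth unpacking. The reservation itself was driven entirely by the environment the trace shows (NRHUGE=512, HUGENODE=0,1) before setup.sh ran, and the @96 test gates on the bracketed token of the transparent-hugepage sysfs switch: only when THP is not [never] does the verifier bother fetching AnonHugePages. A hedged sketch of that gate, reusing the hypothetical helper sketched earlier:

    # Reservation as the trace drove it (environment-driven, run as root):
    #   NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
    # THP gate, mirroring hugepages.sh@96 above:
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)          # 0 kB in this run's snapshot
    fi
    echo "anon_hugepages=${anon}"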
00:03:24.088 06:40:38 -- setup/common.sh@31-32 -- # read -r var val _; [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] || continue -- repeated for every snapshot key from MemTotal through HardwareCorrupted
00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:24.089 06:40:38 -- setup/common.sh@33 -- # echo 0
00:03:24.089 06:40:38 -- setup/common.sh@33 -- # return 0
00:03:24.089 06:40:38 -- setup/hugepages.sh@97 -- # anon=0
00:03:24.089 06:40:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:24.089 06:40:38 -- setup/common.sh@17-31 -- # local get=HugePages_Surp node=; mem_f=/proc/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '
00:03:24.089 06:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37661392 kB' 'MemAvailable: 42404068 kB' 'Buffers: 2696 kB' 'Cached: 18350432 kB' 'SwapCached: 0 kB' 'Active: 14354036 kB' 'Inactive: 4480652 kB' 'Active(anon): 13718904 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484864 kB' 'Mapped: 217024 kB' 'Shmem: 13237344 kB' 'KReclaimable: 241032 kB' 'Slab: 633664 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392632 kB' 'KernelStack: 12960 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14847228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
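verify_nr_hugepages walks this same lookup three times against the system-wide file, once per counter it needs. Sketched with the hypothetical helper from earlier, with the meanings hedged to what this run shows:

    surp=$(get_meminfo_sketch HugePages_Surp)    # pages allocated beyond the configured pool (0 here)
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # pages promised to mappings but not yet faulted in (0 here)
    total=$(get_meminfo_sketch HugePages_Total)  # configured pool size (1024 here)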
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 
06:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.089 
06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.089 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.089 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 
06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.090 06:40:38 -- setup/common.sh@33 -- # echo 0 00:03:24.090 06:40:38 -- setup/common.sh@33 -- # return 0 00:03:24.090 06:40:38 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.090 06:40:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.090 06:40:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.090 06:40:38 -- setup/common.sh@18 -- # local node= 00:03:24.090 06:40:38 -- setup/common.sh@19 -- # local var val 00:03:24.090 06:40:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.090 06:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.090 06:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.090 06:40:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.090 06:40:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.090 06:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37661080 kB' 'MemAvailable: 42403756 kB' 'Buffers: 2696 kB' 'Cached: 18350456 kB' 'SwapCached: 0 kB' 'Active: 14353028 kB' 'Inactive: 4480652 kB' 'Active(anon): 13717896 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483752 kB' 'Mapped: 216988 kB' 'Shmem: 13237368 kB' 'KReclaimable: 241032 kB' 'Slab: 633568 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392536 kB' 'KernelStack: 12960 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14847368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.090 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.090 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # continue 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.091 06:40:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.091 06:40:38 -- setup/common.sh@32 -- # 
continue
[... xtrace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (Zswap through HugePages_Free) and continues without a match ...]
00:03:24.092 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:24.092 06:40:38 -- setup/common.sh@33 -- # echo 0
00:03:24.092 06:40:38 -- setup/common.sh@33 -- # return 0
00:03:24.092 06:40:38 -- setup/hugepages.sh@100 -- # resv=0
00:03:24.092 06:40:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:24.092 nr_hugepages=1024
06:40:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:24.092 resv_hugepages=0
06:40:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:24.092 surplus_hugepages=0
06:40:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:24.092 anon_hugepages=0
06:40:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.092 06:40:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
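The function being traced here, setup/common.sh's get_meminfo, resolves one field from /proc/meminfo (or from a node's meminfo file when a node argument is given) by scanning the file line by line, which is what produces the long runs of [[ field == ... ]] / continue records in this log. A minimal bash re-creation of that pattern, reconstructed from the xtrace alone (names and details are inferred for illustration, not copied from the SPDK source):

    shopt -s extglob
    get_meminfo() {
      local get=$1 node=$2 mem_f mem line var val _
      mem_f=/proc/meminfo
      # A node argument switches to the per-node view, as in the node0/node1 probes in this log
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix; strip it
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        # this linear scan is the source of the per-field compare/continue records
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
    }
    get_meminfo HugePages_Total    # prints 1024 on this host
    get_meminfo HugePages_Surp 0   # prints 0 for node0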
00:03:24.092 06:40:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:24.092 06:40:38 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:24.092 06:40:38 -- setup/common.sh@18 -- # local node=
00:03:24.092 06:40:38 -- setup/common.sh@19 -- # local var val
00:03:24.092 06:40:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.092 06:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.092 06:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:24.092 06:40:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:24.092 06:40:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.092 06:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.092 06:40:38 -- setup/common.sh@31 -- # IFS=': '
00:03:24.092 06:40:38 -- setup/common.sh@31 -- # read -r var val _
00:03:24.092 06:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37661424 kB' 'MemAvailable: 42404100 kB' 'Buffers: 2696 kB' 'Cached: 18350460 kB' 'SwapCached: 0 kB' 'Active: 14353744 kB' 'Inactive: 4480652 kB' 'Active(anon): 13718612 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484500 kB' 'Mapped: 216988 kB' 'Shmem: 13237372 kB' 'KReclaimable: 241032 kB' 'Slab: 633568 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392536 kB' 'KernelStack: 13008 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14847752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[... xtrace condensed: setup/common.sh@31-32 walks every field (MemTotal through Unaccepted) and continues without matching HugePages_Total ...]
00:03:24.093 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:24.093 06:40:38 -- setup/common.sh@33 -- # echo 1024
00:03:24.093 06:40:38 -- setup/common.sh@33 -- # return 0
00:03:24.093 06:40:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:24.093 06:40:38 -- setup/hugepages.sh@112 -- # get_nodes
00:03:24.093 06:40:38 -- setup/hugepages.sh@27 -- # local node
00:03:24.093 06:40:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.093 06:40:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.093 06:40:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:24.093 06:40:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:24.093 06:40:38 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:24.093 06:40:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:24.093 06:40:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.093 06:40:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.093 06:40:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:24.093 06:40:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.093 06:40:38 -- setup/common.sh@18 -- # local node=0
00:03:24.093 06:40:38 -- setup/common.sh@19 -- # local var val
00:03:24.093 06:40:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.093 06:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.093 06:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:24.093 06:40:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:24.093 06:40:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.093 06:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.093 06:40:38 -- setup/common.sh@31 -- # IFS=': '
00:03:24.093 06:40:38 -- setup/common.sh@31 -- # read -r var val _
00:03:24.093 06:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21761480 kB' 'MemUsed: 11068404 kB' 'SwapCached: 0 kB' 'Active: 8583376 kB' 'Inactive: 192856 kB' 'Active(anon): 8152984 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 192856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8523088 kB' 'Mapped: 113592 kB' 'AnonPages: 256292 kB' 'Shmem: 7899840 kB' 'KernelStack: 7512 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117736 kB' 'Slab: 339588 kB' 'SReclaimable: 117736 kB' 'SUnreclaim: 221852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
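The sizes in these snapshots are self-consistent: 1024 hugepages at the default 2048 kB page size is exactly the 'Hugetlb: 2097152 kB' (2 GiB) the global snapshot reports, and it matches the size the test suite requests. A one-line check in bash, using only values from this log:

    pages=1024 page_kb=2048
    echo $(( pages * page_kb ))   # 2097152 (kB), the 'Hugetlb:' line above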
[... xtrace condensed: setup/common.sh@31-32 walks the node0 fields (MemTotal through HugePages_Free) and continues without matching HugePages_Surp ...]
00:03:24.094 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.094 06:40:38 -- setup/common.sh@33 -- # echo 0
00:03:24.094 06:40:38 -- setup/common.sh@33 -- # return 0
00:03:24.094 06:40:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.094 06:40:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:24.094 06:40:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:24.094 06:40:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:24.094 06:40:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:24.094 06:40:38 -- setup/common.sh@18 -- # local node=1
00:03:24.094 06:40:38 -- setup/common.sh@19 -- # local var val
00:03:24.094 06:40:38 -- setup/common.sh@20 -- # local mem_f mem
00:03:24.094 06:40:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:24.094 06:40:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:24.094 06:40:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:24.094 06:40:38 -- setup/common.sh@28 -- # mapfile -t mem
00:03:24.094 06:40:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:24.094 06:40:38 -- setup/common.sh@31 -- # IFS=': '
00:03:24.094 06:40:38 -- setup/common.sh@31 -- # read -r var val _
00:03:24.094 06:40:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15901012 kB' 'MemUsed: 11810832 kB' 'SwapCached: 0 kB' 'Active: 5770492 kB' 'Inactive: 4287796 kB' 'Active(anon): 5565752 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4287796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9830096 kB' 'Mapped: 103396 kB' 'AnonPages: 228324 kB' 'Shmem: 5337560 kB' 'KernelStack: 5496 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123296 kB' 'Slab: 293980 kB' 'SReclaimable: 123296 kB' 'SUnreclaim: 170684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
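Both node snapshots report HugePages_Total: 512 and the surplus probes each return 0, so the per-node counts sum to the global 1024 the test expects. The same accounting can be read straight from sysfs; a sketch under the standard kernel layout (the path is the usual per-node hugepage counter, not something this harness defines):

    total=0
    for d in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-2048kB; do
      (( total += $(<"$d"/nr_hugepages) ))   # 512 per node on this host
    done
    echo "total=$total"   # expect 1024, matching HugePages_Total above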
[... xtrace condensed: setup/common.sh@31-32 walks the node1 fields (MemTotal through HugePages_Free) and continues without matching HugePages_Surp ...]
00:03:24.095 06:40:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.095 06:40:38 -- setup/common.sh@33 -- # echo 0
00:03:24.095 06:40:38 -- setup/common.sh@33 -- # return 0
00:03:24.095 06:40:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.095 06:40:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.095 06:40:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.095 06:40:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.095 06:40:38 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:24.095 node0=512 expecting 512
06:40:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.095 06:40:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.095 06:40:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.095 06:40:38 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:24.095 node1=512 expecting 512
06:40:38 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:24.095
00:03:24.095 real 0m1.681s
00:03:24.095 user 0m0.677s
00:03:24.095 sys 0m0.973s
00:03:24.095 06:40:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:24.095 06:40:38 -- common/autotest_common.sh@10 -- # set +x
00:03:24.095 ************************************
00:03:24.095 END TEST per_node_1G_alloc
00:03:24.095 ************************************
00:03:24.354 06:40:38 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
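The shape of the run_test wrapper is visible in the log itself: a START banner, the timed test function (the real/user/sys lines come from bash's time), an END banner, and xtrace toggling around it. Roughly, as a sketch reconstructed from this output (the actual implementation in autotest_common.sh also manages xtrace state and exit codes):

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"   # e.g. run_test even_2G_alloc even_2G_alloc, as traced above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }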
06:40:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:24.354 06:40:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:24.354 06:40:38 -- common/autotest_common.sh@10 -- # set +x 00:03:24.354 ************************************ 00:03:24.354 START TEST even_2G_alloc 00:03:24.354 ************************************ 00:03:24.354 06:40:38 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:24.354 06:40:38 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:24.354 06:40:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.354 06:40:38 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:24.354 06:40:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.354 06:40:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.354 06:40:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:24.354 06:40:38 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:24.354 06:40:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.354 06:40:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.354 06:40:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.354 06:40:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.354 06:40:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.354 06:40:38 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:24.354 06:40:38 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:24.354 06:40:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.354 06:40:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.354 06:40:38 -- setup/hugepages.sh@83 -- # : 512 00:03:24.354 06:40:38 -- setup/hugepages.sh@84 -- # : 1 00:03:24.354 06:40:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.354 06:40:38 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:24.354 06:40:38 -- setup/hugepages.sh@83 -- # : 0 00:03:24.354 06:40:38 -- setup/hugepages.sh@84 -- # : 0 00:03:24.354 06:40:38 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:24.354 06:40:38 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:24.354 06:40:38 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:24.354 06:40:38 -- setup/hugepages.sh@153 -- # setup output 00:03:24.354 06:40:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.354 06:40:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:25.734 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.734 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:25.734 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.734 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.734 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.734 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.734 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.734 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:25.734 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.734 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:25.734 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:25.734 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:25.734 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:25.734 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:25.734 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:25.734 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:03:25.734 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:25.734 06:40:39 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:25.734 06:40:39 -- setup/hugepages.sh@89 -- # local node 00:03:25.734 06:40:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:25.734 06:40:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:25.734 06:40:39 -- setup/hugepages.sh@92 -- # local surp 00:03:25.734 06:40:39 -- setup/hugepages.sh@93 -- # local resv 00:03:25.734 06:40:39 -- setup/hugepages.sh@94 -- # local anon 00:03:25.734 06:40:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:25.734 06:40:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:25.734 06:40:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:25.734 06:40:39 -- setup/common.sh@18 -- # local node= 00:03:25.734 06:40:39 -- setup/common.sh@19 -- # local var val 00:03:25.734 06:40:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:25.734 06:40:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.734 06:40:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.734 06:40:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.734 06:40:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.734 06:40:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.734 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.734 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.734 06:40:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37675440 kB' 'MemAvailable: 42418116 kB' 'Buffers: 2696 kB' 'Cached: 18350528 kB' 'SwapCached: 0 kB' 'Active: 14347852 kB' 'Inactive: 4480652 kB' 'Active(anon): 13712720 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478384 kB' 'Mapped: 216176 kB' 'Shmem: 13237440 kB' 'KReclaimable: 241032 kB' 'Slab: 633132 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392100 kB' 'KernelStack: 12896 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14823736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:25.734 06:40:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.734 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.734 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.734 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.734 06:40:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.734 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.734 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.734 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.734 06:40:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.734 06:40:39 -- setup/common.sh@32 -- # continue 
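verify_nr_hugepages starts by gating on transparent hugepages: the hugepages.sh@96 record above compares the THP mode string read from sysfs ("always [madvise] never" on this host) against the bracketed [never] selection, and only bothers to subtract anonymous hugepages when THP is not disabled. The visible logic, sketched in bash (reusing the hypothetical get_meminfo sketch from earlier in this log):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 kB in the snapshot above
    fi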
00:03:25.734 06:40:39 -- setup/common.sh@31 -- # IFS=': '
00:03:25.734 06:40:39 -- setup/common.sh@31 -- # read -r var val _
00:03:25.734 06:40:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.734 06:40:39 -- setup/common.sh@32 -- # continue
[... the same read/compare/continue xtrace repeats for every remaining /proc/meminfo field (Cached, SwapCached, Active, Inactive, ... HardwareCorrupted) until the requested key is reached ...]
00:03:25.735 06:40:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.735 06:40:39 -- setup/common.sh@33 -- # echo 0
00:03:25.735 06:40:39 -- setup/common.sh@33 -- # return 0
00:03:25.735 06:40:39 -- setup/hugepages.sh@97 -- # anon=0
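The run above is the body of get_meminfo scanning /proc/meminfo one field at a time. A minimal sketch of that loop, reconstructed from the xtrace alone (function structure and the here-string feed are assumptions; the authoritative code is SPDK's test/setup/common.sh):

#!/usr/bin/env bash
# Reconstructed sketch of the get_meminfo scanner traced above.
# Assumption: built from the xtrace output, not from the SPDK sources.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo mem line
	# With a node argument, read the node-local meminfo instead
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Node meminfo lines carry a "Node N " prefix; strip it (needs extglob)
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		# This compare/continue pair is what fills the trace above
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

Each [[ Field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] entry in the log is one iteration of this loop; the backslashes are just how bash xtrace prints an unquoted pattern word.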
00:03:25.735 06:40:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.735 06:40:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.735 06:40:39 -- setup/common.sh@18 -- # local node=
00:03:25.735 06:40:39 -- setup/common.sh@19 -- # local var val
00:03:25.735 06:40:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.735 06:40:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.735 06:40:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.735 06:40:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.735 06:40:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.735 06:40:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.735 06:40:39 -- setup/common.sh@31 -- # IFS=': '
00:03:25.735 06:40:39 -- setup/common.sh@31 -- # read -r var val _
00:03:25.735 06:40:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37680396 kB' 'MemAvailable: 42423072 kB' 'Buffers: 2696 kB' 'Cached: 18350532 kB' 'SwapCached: 0 kB' 'Active: 14348344 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713212 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478936 kB' 'Mapped: 216176 kB' 'Shmem: 13237444 kB' 'KReclaimable: 241032 kB' 'Slab: 633116 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392084 kB' 'KernelStack: 12912 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14827180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[... read/compare/continue xtrace repeats for every field from MemTotal down until the requested key matches ...]
00:03:25.736 06:40:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.736 06:40:39 -- setup/common.sh@33 -- # echo 0
00:03:25.736 06:40:39 -- setup/common.sh@33 -- # return 0
00:03:25.736 06:40:39 -- setup/hugepages.sh@99 -- # surp=0
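As a cross-check against the snapshot above, the same values can be read without the scan loop with a standard one-liner (shown for reference only; this is not something the harness runs):

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0 on this box
awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo   # prints 0 on this box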
00:03:25.736 06:40:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:25.736 06:40:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:25.736 06:40:39 -- setup/common.sh@18 -- # local node=
[... same local/mem_f/mapfile setup xtrace as above ...]
00:03:25.737 06:40:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37680144 kB' 'MemAvailable: 42422820 kB' 'Buffers: 2696 kB' 'Cached: 18350540 kB' 'SwapCached: 0 kB' 'Active: 14348444 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713312 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479060 kB' 'Mapped: 216168 kB' 'Shmem: 13237452 kB' 'KReclaimable: 241032 kB' 'Slab: 633108 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392076 kB' 'KernelStack: 13072 kB' 'PageTables: 8944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14827556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199116 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[... read/compare/continue xtrace repeats for every field until the requested key matches ...]
00:03:25.738 06:40:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.738 06:40:39 -- setup/common.sh@33 -- # echo 0
00:03:25.738 06:40:39 -- setup/common.sh@33 -- # return 0
00:03:25.738 06:40:39 -- setup/hugepages.sh@100 -- # resv=0
00:03:25.738 06:40:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:25.738 nr_hugepages=1024
00:03:25.738 06:40:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.738 resv_hugepages=0
00:03:25.738 06:40:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.738 surplus_hugepages=0
00:03:25.738 06:40:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.738 anon_hugepages=0
00:03:25.738 06:40:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.738 06:40:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:25.738 06:40:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.738 06:40:39 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.738 06:40:39 -- setup/common.sh@18 -- # local node=
[... same local/mem_f/mapfile setup xtrace as above ...]
00:03:25.738 06:40:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37680092 kB' 'MemAvailable: 42422768 kB' 'Buffers: 2696 kB' 'Cached: 18350552 kB' 'SwapCached: 0 kB' 'Active: 14349008 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713876 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479616 kB' 'Mapped: 216168 kB' 'Shmem: 13237464 kB' 'KReclaimable: 241032 kB' 'Slab: 633108 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392076 kB' 'KernelStack: 13264 kB' 'PageTables: 9828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14826304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199212 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[... read/compare/continue xtrace repeats for every field until the requested key matches ...]
00:03:25.739 06:40:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:25.739 06:40:39 -- setup/common.sh@33 -- # echo 1024
00:03:25.739 06:40:39 -- setup/common.sh@33 -- # return 0
00:03:25.739 06:40:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:25.739 06:40:39 -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.739 06:40:39 -- setup/hugepages.sh@27 -- # local node
00:03:25.739 06:40:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.739 06:40:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.739 06:40:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.739 06:40:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.739 06:40:39 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.739 06:40:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.739 06:40:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.739 06:40:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.739 06:40:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.739 06:40:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.740 06:40:39 -- setup/common.sh@18 -- # local node=0
00:03:25.740 06:40:39 -- setup/common.sh@19 -- # local var val
00:03:25.740 06:40:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.740 06:40:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.740 06:40:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.740 06:40:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.740 06:40:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.740 06:40:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.740 06:40:39 -- setup/common.sh@31 -- # IFS=': '
00:03:25.740 06:40:39 -- setup/common.sh@31 -- # read -r var val _
00:03:25.740 06:40:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21776536 kB' 'MemUsed: 11053348 kB' 'SwapCached: 0 kB' 'Active: 8581176 kB' 'Inactive: 192856 kB' 'Active(anon): 8150784 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 192856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8523096 kB' 'Mapped: 112832 kB' 'AnonPages: 254100 kB' 'Shmem: 7899848 kB' 'KernelStack: 7800 kB' 'PageTables: 5316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117736 kB' 'Slab: 339364 kB' 'SReclaimable: 117736 kB' 'SUnreclaim: 221628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... read/compare/continue xtrace repeats over the node0 fields until HugePages_Surp matches ...]
00:03:25.740 06:40:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.740 06:40:39 -- setup/common.sh@33 -- # echo 0
00:03:25.740 06:40:39 -- setup/common.sh@33 -- # return 0
00:03:25.740 06:40:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.740 06:40:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.741 06:40:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.741 06:40:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.741 06:40:39 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.741 06:40:39 -- setup/common.sh@18 -- # local node=1
00:03:25.741 06:40:39 -- setup/common.sh@19 -- # local var val
00:03:25.741 06:40:39 -- setup/common.sh@20 -- # local mem_f mem
00:03:25.741 06:40:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.741 06:40:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.741 06:40:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.741 06:40:39 -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.741 06:40:39 -- setup/common.sh@29
-- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15904812 kB' 'MemUsed: 11807032 kB' 'SwapCached: 0 kB' 'Active: 5768096 kB' 'Inactive: 4287796 kB' 'Active(anon): 5563356 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4287796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9830184 kB' 'Mapped: 103336 kB' 'AnonPages: 225808 kB' 'Shmem: 5337648 kB' 'KernelStack: 5512 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123296 kB' 'Slab: 293744 kB' 'SReclaimable: 123296 kB' 'SUnreclaim: 170448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.741 06:40:39 -- setup/common.sh@32 -- # continue 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:25.741 06:40:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:25.742 06:40:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:25.742 06:40:39 -- setup/common.sh@33 -- # echo 0 00:03:25.742 06:40:39 -- setup/common.sh@33 -- # return 0 00:03:25.742 06:40:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:25.742 06:40:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.742 06:40:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.742 06:40:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.742 06:40:39 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:25.742 node0=512 expecting 512 00:03:25.742 06:40:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:25.742 06:40:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:25.742 06:40:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:25.742 06:40:39 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:25.742 node1=512 expecting 512 00:03:25.742 06:40:39 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:25.742 00:03:25.742 real 0m1.599s 00:03:25.742 user 0m0.673s 00:03:25.742 sys 0m0.892s 00:03:25.742 06:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.742 06:40:39 -- common/autotest_common.sh@10 -- # set +x 00:03:25.742 ************************************ 00:03:25.742 END TEST even_2G_alloc 00:03:25.742 ************************************ 00:03:25.742 06:40:39 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:25.742 06:40:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.742 06:40:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.742 06:40:39 -- common/autotest_common.sh@10 -- # set +x 00:03:26.000 ************************************ 00:03:26.000 START TEST odd_alloc 00:03:26.000 ************************************ 00:03:26.000 06:40:39 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:26.000 06:40:39 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:26.000 06:40:39 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:26.000 06:40:39 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.000 06:40:39 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.000 06:40:39 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:26.001 06:40:39 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.001 06:40:39 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.001 06:40:39 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.001 06:40:39 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:26.001 06:40:39 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.001 06:40:39 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.001 06:40:39 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.001 06:40:39 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.001 06:40:39 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.001 06:40:39 -- setup/hugepages.sh@81 -- # (( 
_no_nodes > 0 )) 00:03:26.001 06:40:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.001 06:40:39 -- setup/hugepages.sh@83 -- # : 513 00:03:26.001 06:40:39 -- setup/hugepages.sh@84 -- # : 1 00:03:26.001 06:40:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.001 06:40:39 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:26.001 06:40:39 -- setup/hugepages.sh@83 -- # : 0 00:03:26.001 06:40:39 -- setup/hugepages.sh@84 -- # : 0 00:03:26.001 06:40:39 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.001 06:40:39 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:26.001 06:40:39 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:26.001 06:40:39 -- setup/hugepages.sh@160 -- # setup output 00:03:26.001 06:40:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.001 06:40:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.381 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.382 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.382 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.382 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.382 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.382 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.382 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.382 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.382 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.382 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:27.382 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:27.382 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:27.382 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:27.382 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:27.382 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:27.382 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:27.382 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:27.382 06:40:41 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:27.382 06:40:41 -- setup/hugepages.sh@89 -- # local node 00:03:27.382 06:40:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.382 06:40:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.382 06:40:41 -- setup/hugepages.sh@92 -- # local surp 00:03:27.382 06:40:41 -- setup/hugepages.sh@93 -- # local resv 00:03:27.382 06:40:41 -- setup/hugepages.sh@94 -- # local anon 00:03:27.382 06:40:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.382 06:40:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.382 06:40:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.382 06:40:41 -- setup/common.sh@18 -- # local node= 00:03:27.382 06:40:41 -- setup/common.sh@19 -- # local var val 00:03:27.382 06:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.382 06:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.382 06:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.382 06:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.382 06:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.382 06:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37672076 kB' 'MemAvailable: 42414752 kB' 'Buffers: 2696 kB' 'Cached: 18350620 kB' 'SwapCached: 0 kB' 'Active: 14353744 kB' 'Inactive: 4480652 kB' 'Active(anon): 13718612 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484320 kB' 'Mapped: 216948 kB' 'Shmem: 13237532 kB' 'KReclaimable: 241032 kB' 'Slab: 633132 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392100 kB' 'KernelStack: 12864 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14829712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 
06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.382 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.382 06:40:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
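
The odd_alloc prologue traced earlier asks for 1025 pages split across _no_nodes=2, and the nodes_test[_no_nodes - 1]=512, : 513, : 1, nodes_test[_no_nodes - 1]=513, : 0, : 0 sequence is consistent with a floor-division remainder loop along these lines (a plausible reconstruction of hugepages.sh@81-84, not the verbatim source):

_nr_hugepages=1025 _no_nodes=2
nodes_test=()
while ((_no_nodes > 0)); do
    nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))  # floor share per node
    : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))         # 513 left after node1
    : $((_no_nodes -= 1))
done
echo "${nodes_test[@]}"   # -> 513 512, the per-node request being verified here

The last node filled gets the remainder, which is why node1 lands on 512 and node0 on 513 for an odd total.
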
00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.383 06:40:41 -- setup/common.sh@33 -- # echo 0 00:03:27.383 06:40:41 -- setup/common.sh@33 -- # return 0 00:03:27.383 06:40:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:27.383 06:40:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.383 06:40:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.383 06:40:41 -- setup/common.sh@18 -- # local node= 00:03:27.383 06:40:41 -- setup/common.sh@19 -- # local var val 00:03:27.383 06:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.383 06:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.383 06:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.383 06:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.383 06:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.383 06:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37678372 kB' 'MemAvailable: 42421048 kB' 'Buffers: 2696 kB' 'Cached: 18350624 kB' 'SwapCached: 0 kB' 'Active: 14353440 kB' 'Inactive: 4480652 kB' 'Active(anon): 13718308 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483964 kB' 'Mapped: 217012 kB' 'Shmem: 13237536 kB' 'KReclaimable: 
241032 kB' 'Slab: 633116 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392084 kB' 'KernelStack: 12816 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14829724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198944 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.383 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.383 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@32 -- # continue 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 
06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384
[xtrace elided: the setup/common.sh@31-32 read loop steps past HugePages_Total, HugePages_Free and HugePages_Rsvd until the HugePages_Surp key matches]
06:40:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.384 06:40:41 -- setup/common.sh@33 -- # echo 0 00:03:27.384 06:40:41 -- setup/common.sh@33 -- # return 0 00:03:27.384 06:40:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:27.384 06:40:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.384 06:40:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.384 06:40:41 -- setup/common.sh@18 -- # local node= 00:03:27.384 06:40:41 -- setup/common.sh@19 -- # local var val 00:03:27.384 06:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.384 06:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.384 06:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.384 06:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.384 06:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.384 06:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.384 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.384 06:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37682628 kB' 'MemAvailable: 42425304 kB' 'Buffers: 2696 kB' 'Cached: 18350632 kB' 'SwapCached: 0 kB' 'Active: 14347784 kB' 'Inactive: 4480652 kB' 'Active(anon): 13712652 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478056 kB' 'Mapped: 216560 kB' 'Shmem: 13237544 kB' 'KReclaimable: 241032 kB' 'Slab: 633108 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392076 kB' 'KernelStack: 12960 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14824120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:27.384
[xtrace elided: the same setup/common.sh@31-32 loop walks every field of the snapshot above, from MemTotal through HugePages_Free, and hits continue on each key that is not HugePages_Rsvd]
06:40:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.385 06:40:41 -- setup/common.sh@33 -- # echo 0 00:03:27.385 06:40:41 -- setup/common.sh@33 -- # return 0 00:03:27.386 06:40:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:27.386
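For readers following the trace: the pattern above is one get_meminfo helper that picks /proc/meminfo or a per-node sysfs meminfo file, strips the sysfs "Node N " prefix, and scans key/value pairs with IFS=': ' until the requested field matches. The sketch below re-creates that effect (the traced function reads the file into an array with mapfile; this sketch streams it instead); get_meminfo_sketch is an illustrative name, not the verbatim setup/common.sh function.

    shopt -s extglob   # needed for the +([0-9]) pattern, as in the traced script

    # Return the value of one meminfo field, system-wide or for one NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; the trace switches files the same way.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }              # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Rsvd val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

    # get_meminfo_sketch HugePages_Total    -> 1025 (matches the snapshot above)
    # get_meminfo_sketch HugePages_Surp 0   -> 0    (node0, via sysfs)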
06:40:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:27.386 nr_hugepages=1025 00:03:27.386 06:40:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.386 resv_hugepages=0 00:03:27.386 06:40:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.386 surplus_hugepages=0 00:03:27.386 06:40:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.386 anon_hugepages=0 00:03:27.386 06:40:41 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.386 06:40:41 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:27.386 06:40:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.386 06:40:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.386 06:40:41 -- setup/common.sh@18 -- # local node= 00:03:27.386 06:40:41 -- setup/common.sh@19 -- # local var val 00:03:27.386 06:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.386 06:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.386 06:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.386 06:40:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.386 06:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.386 06:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.386 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.386 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.386 06:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37682944 kB' 'MemAvailable: 42425620 kB' 'Buffers: 2696 kB' 'Cached: 18350652 kB' 'SwapCached: 0 kB' 'Active: 14347456 kB' 'Inactive: 4480652 kB' 'Active(anon): 13712324 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478060 kB' 'Mapped: 216152 kB' 'Shmem: 13237564 kB' 'KReclaimable: 241032 kB' 'Slab: 633104 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392072 kB' 'KernelStack: 12928 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14824136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198892 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:27.386
[xtrace elided: the setup/common.sh@31-32 read loop walks the snapshot above from MemTotal through Unaccepted until the HugePages_Total key matches]
06:40:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.387 06:40:41 -- setup/common.sh@33 -- # echo 1025 00:03:27.387 06:40:41 -- setup/common.sh@33 -- # return 0 00:03:27.387 06:40:41 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.387 06:40:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.387 06:40:41 -- setup/hugepages.sh@27 -- # local node 00:03:27.387 06:40:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.387 06:40:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:27.387 06:40:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.387 06:40:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:27.387 06:40:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.387 06:40:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.387
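The get_nodes step just recorded what sysfs reports per node (512 and 513 pages). The unequal counts are the point of the odd_alloc test: an odd total cannot split evenly over two nodes. Below is a minimal sketch of that arithmetic, assuming a floor split with the remainder landing on the last node; the test itself, per the "expecting" output further down, only checks the sorted multiset of counts, not which node holds the extra page. split_hugepages_sketch is an illustrative name.

    # Split an odd hugepage total across nodes: floor share per node,
    # remainder onto the last one -> 512/513 for 1025 pages over 2 nodes.
    split_hugepages_sketch() {
        local total=$1 nodes=$2 n
        local per=$((total / nodes)) rem=$((total % nodes))
        for ((n = 0; n < nodes; n++)); do
            echo "node$n=$((per + (n == nodes - 1 ? rem : 0)))"
        done
    }

    # split_hugepages_sketch 1025 2
    #   node0=512
    #   node1=513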
06:40:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.387 06:40:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.387 06:40:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.387 06:40:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.387 06:40:41 -- setup/common.sh@18 -- # local node=0 00:03:27.387 06:40:41 -- setup/common.sh@19 -- # local var val 00:03:27.387 06:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.387 06:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.387 06:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.387 06:40:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.387 06:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.387 06:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.387 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.387 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.387 06:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21781584 kB' 'MemUsed: 11048300 kB' 'SwapCached: 0 kB' 'Active: 8579572 kB' 'Inactive: 192856 kB' 'Active(anon): 8149180 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 192856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8523104 kB' 'Mapped: 112836 kB' 'AnonPages: 252504 kB' 'Shmem: 7899856 kB' 'KernelStack: 7432 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117736 kB' 'Slab: 339320 kB' 'SReclaimable: 117736 kB' 'SUnreclaim: 221584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:27.387
[xtrace elided: the read loop walks the node0 snapshot from MemTotal through HugePages_Free until the HugePages_Surp key matches]
06:40:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.388 06:40:41 -- setup/common.sh@33 -- # echo 0 00:03:27.388 06:40:41 -- setup/common.sh@33 -- # return 0 00:03:27.388 06:40:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.388 06:40:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.388 06:40:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.388 06:40:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:27.388 06:40:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.388 06:40:41 -- setup/common.sh@18 -- # local node=1 00:03:27.388 06:40:41 -- setup/common.sh@19 -- # local var val 00:03:27.388 06:40:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.388 06:40:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.388 06:40:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:27.388 06:40:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:27.388 06:40:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.388 06:40:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.388 06:40:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.388 06:40:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.388 06:40:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 15901108 kB' 'MemUsed: 11810736 kB' 'SwapCached: 0 kB' 'Active: 5767920 kB' 'Inactive: 4287796 kB' 'Active(anon): 5563180 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4287796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9830272 kB' 'Mapped: 103316 kB' 'AnonPages: 225560 kB' 'Shmem: 5337736 kB' 'KernelStack: 5496 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123296 kB' 'Slab: 293784 kB' 'SReclaimable: 123296 kB' 'SUnreclaim: 170488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:27.388
[xtrace elided: the read loop walks the node1 snapshot until the HugePages_Surp key matches]
06:40:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.389 06:40:41 -- setup/common.sh@33 -- # echo 0 00:03:27.389 06:40:41 -- setup/common.sh@33 -- # return 0 00:03:27.389 06:40:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.389
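Both per-node reads above came back with HugePages_Surp: 0, and the two node snapshots report HugePages_Total of 512 and 513. As a quick cross-check sketch that the per-node counts really account for the global 1025 seen in /proc/meminfo (a hypothetical helper, not part of the traced scripts):

    # Sum HugePages_Total across all NUMA nodes' sysfs meminfo files.
    sum_node_hugepages() {
        local node total=0 val
        for node in /sys/devices/system/node/node[0-9]*; do
            val=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
            (( total += val ))
        done
        echo "$total"
    }

    # With the snapshots above: 512 + 513 = 1025, matching /proc/meminfo.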
06:40:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.389 06:40:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.389 06:40:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.389 06:40:41 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:27.389 node0=512 expecting 513 00:03:27.389 06:40:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.389 06:40:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.389 06:40:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.389
06:40:41 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:27.389 node1=513 expecting 512 00:03:27.389 06:40:41 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:27.389 00:03:27.389 real 0m1.639s 00:03:27.389 user 0m0.690s 00:03:27.389 sys 0m0.921s 00:03:27.389 06:40:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.389 06:40:41 -- common/autotest_common.sh@10 -- # set +x 00:03:27.389 ************************************ 00:03:27.389 END TEST odd_alloc 00:03:27.389 ************************************
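The [[ 512 513 == \5\1\2\ \5\1\3 ]] check that just passed comes from an index-as-sort trick visible at hugepages.sh@127-130: each expected and observed count is used as an array index, and ${!arr[*]} then expands the indices in ascending order, so the comparison is order-insensitive (which is why "node0=512 expecting 513" can still pass). A sketch of that comparison under those assumptions; the array names are illustrative and the values are the ones from this run:

    expected=(513 512)   # per-node counts the test computed
    observed=(512 513)   # per-node counts read back from sysfs

    declare -a sorted_t sorted_s
    for v in "${expected[@]}"; do sorted_t[v]=1; done
    for v in "${observed[@]}"; do sorted_s[v]=1; done

    # ${!arr[*]} lists indices in ascending order: "512 513" on both sides here.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'hugepage distribution OK'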
00:03:27.648 06:40:41 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:27.648 06:40:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:27.648 06:40:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:27.648 06:40:41 -- common/autotest_common.sh@10 -- # set +x 00:03:27.648 ************************************ 00:03:27.648 START TEST custom_alloc 00:03:27.648 ************************************ 00:03:27.648 06:40:41 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:27.648 06:40:41 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:27.648 06:40:41 -- setup/hugepages.sh@169 -- # local node 00:03:27.648 06:40:41 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:27.648 06:40:41 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:27.648 06:40:41 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:27.648 06:40:41 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:27.648 06:40:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:27.648 06:40:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:27.648 06:40:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.648 06:40:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.648 06:40:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.648 06:40:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:27.648 06:40:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.648 06:40:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.648 06:40:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.648 06:40:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:27.648 06:40:41 -- setup/hugepages.sh@83 -- # : 256 00:03:27.648 06:40:41 -- setup/hugepages.sh@84 -- # : 1 00:03:27.648 06:40:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:27.648 06:40:41 -- setup/hugepages.sh@83 -- # : 0 00:03:27.648 06:40:41 -- setup/hugepages.sh@84 -- # : 0 00:03:27.648 06:40:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:27.648 06:40:41 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:27.648 06:40:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.648 06:40:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.648 06:40:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.648 06:40:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.648 06:40:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.648 06:40:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.648 06:40:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.648 06:40:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.648 06:40:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.648 06:40:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.648 06:40:41 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:27.648 06:40:41 -- setup/hugepages.sh@78 -- # return 0 00:03:27.648 06:40:41 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:27.648 06:40:41 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:27.648 06:40:41 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.648 06:40:41 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:27.648 06:40:41 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:27.648 06:40:41 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:27.648 06:40:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:27.648 06:40:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.648 06:40:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.648 06:40:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:27.648 06:40:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.648 06:40:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.648 06:40:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:27.648 06:40:41 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.648 06:40:41 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:27.648 06:40:41 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:27.648 06:40:41 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:27.648 06:40:41 -- setup/hugepages.sh@78 -- # return 0 00:03:27.648 06:40:41 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:27.648 06:40:41 -- setup/hugepages.sh@187 -- # setup output 00:03:27.648 06:40:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.648 06:40:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.029 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.029 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:29.029 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:29.029 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:29.029 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:29.029 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:29.029 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:29.029 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:29.029 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:29.029 0000:80:04.7 (8086 0e27): Already using the vfio-pci
driver 00:03:29.029 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:29.029 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:29.029 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:29.029 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:29.029 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:29.029 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:29.029 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:29.029 06:40:42 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:29.029 06:40:42 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:29.029 06:40:42 -- setup/hugepages.sh@89 -- # local node 00:03:29.029 06:40:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.029 06:40:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.029 06:40:42 -- setup/hugepages.sh@92 -- # local surp 00:03:29.029 06:40:42 -- setup/hugepages.sh@93 -- # local resv 00:03:29.029 06:40:42 -- setup/hugepages.sh@94 -- # local anon 00:03:29.029 06:40:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.029 06:40:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.029 06:40:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.029 06:40:42 -- setup/common.sh@18 -- # local node= 00:03:29.029 06:40:42 -- setup/common.sh@19 -- # local var val 00:03:29.029 06:40:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.029 06:40:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.029 06:40:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.029 06:40:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.029 06:40:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.029 06:40:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36618040 kB' 'MemAvailable: 41360716 kB' 'Buffers: 2696 kB' 'Cached: 18350716 kB' 'SwapCached: 0 kB' 'Active: 14348040 kB' 'Inactive: 4480652 kB' 'Active(anon): 13712908 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478484 kB' 'Mapped: 216156 kB' 'Shmem: 13237628 kB' 'KReclaimable: 241032 kB' 'Slab: 633188 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392156 kB' 'KernelStack: 12912 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14824184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 
-- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:42 -- setup/common.sh@32 -- # continue 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.029 06:40:43 -- setup/common.sh@32 -- # continue 
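
[editor's note] The trace above and below is bash xtrace output of setup/common.sh's get_meminfo walking /proc/meminfo one key at a time. A minimal re-creation of that pattern, paraphrased from the traced commands (names taken from the trace itself; a sketch of the pattern, not the verbatim SPDK helper):

shopt -s extglob   # needed at runtime for the "Node N " strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node queries switch to that node's own meminfo file (common.sh@23-24).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # -> the long runs of 'continue' in this log
        echo "${val:-0}"
        return 0
    done
    echo 0
}

Called as get_meminfo AnonHugePages here, every /proc/meminfo key before AnonHugePages fails the match and hits continue, which is exactly the repeated continue lines this trace records.
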
00:03:29.029 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.029 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.029 06:40:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 
06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.030 06:40:43 -- setup/common.sh@33 -- # echo 0 00:03:29.030 06:40:43 -- setup/common.sh@33 -- # return 0 00:03:29.030 06:40:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:29.030 06:40:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.030 06:40:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.030 06:40:43 -- setup/common.sh@18 -- # local node= 00:03:29.030 06:40:43 -- setup/common.sh@19 -- # local var val 00:03:29.030 06:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.030 06:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.030 06:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.030 06:40:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.030 06:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.030 06:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36618292 kB' 'MemAvailable: 41360968 kB' 'Buffers: 2696 kB' 'Cached: 18350720 kB' 'SwapCached: 0 kB' 'Active: 14348388 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713256 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478824 kB' 'Mapped: 216156 kB' 'Shmem: 13237632 kB' 'KReclaimable: 241032 kB' 'Slab: 633172 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392140 kB' 'KernelStack: 12912 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14824196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 
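
[editor's note] The anon=0 just captured, together with the HugePages_Surp and HugePages_Rsvd reads that follow, feed the pool-accounting check in verify_nr_hugepages. A hedged sketch of that flow, assuming the get_meminfo sketch above and this run's 1536-page pool (the function name and exact shape are paraphrased from the trace):

verify_pool() {
    local nr_hugepages=1536   # nodes_hp[0] + nodes_hp[1] for this run
    local anon surp resv total
    anon=$(get_meminfo AnonHugePages)     # 0, as echoed above
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1536
    # Healthy only if the kernel accounts for every requested page:
    (( total == nr_hugepages + surp + resv ))
}
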
00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.030 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.030 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
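
[editor's note] Why the trace prints patterns like \H\u\g\e\P\a\g\e\s\_\S\u\r\p: under set -x, bash backslash-escapes every character of a quoted [[ == ]] operand to show it is being matched literally rather than as a glob. A quick standalone demo:

set -x
get=HugePages_Surp
[[ MemTotal == "$get" ]] || true   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x
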
00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.031 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.031 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.032 06:40:43 -- setup/common.sh@33 -- # echo 0 00:03:29.032 06:40:43 -- setup/common.sh@33 -- # return 0 00:03:29.032 06:40:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:29.032 06:40:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.032 06:40:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:29.032 06:40:43 -- setup/common.sh@18 -- # local node= 00:03:29.032 06:40:43 -- setup/common.sh@19 -- # local var val 00:03:29.032 06:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.032 06:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.032 
06:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.032 06:40:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.032 06:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.032 06:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36618544 kB' 'MemAvailable: 41361220 kB' 'Buffers: 2696 kB' 'Cached: 18350724 kB' 'SwapCached: 0 kB' 'Active: 14347636 kB' 'Inactive: 4480652 kB' 'Active(anon): 13712504 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478060 kB' 'Mapped: 216148 kB' 'Shmem: 13237636 kB' 'KReclaimable: 241032 kB' 'Slab: 633180 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392148 kB' 'KernelStack: 12944 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14824212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- 
setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 
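
[editor's note] For reference, the 1536 pages being accounted for here were requested earlier in this test via the HUGENODE string built at setup/hugepages.sh@181-187: nodes_hp[0]=512 came from get_test_nr_hugepages 1048576 (kB / 2048 kB per page = 512 pages) and nodes_hp[1]=1024 from 2097152 kB. A runnable sketch of that loop with this run's values:

build_hugenode() {
    local -a nodes_hp=(512 1024) HUGENODE=()
    local node _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    local IFS=,
    # Joined with IFS=',' this is the exact string the trace shows:
    echo "HUGENODE=${HUGENODE[*]}"      # nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$_nr_hugepages"  # 1536
}
build_hugenode
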
00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.032 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.032 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.033 06:40:43 -- setup/common.sh@33 -- # echo 0 00:03:29.033 06:40:43 -- setup/common.sh@33 -- # return 0 00:03:29.033 06:40:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:29.033 06:40:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:29.033 nr_hugepages=1536 00:03:29.033 06:40:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.033 resv_hugepages=0 00:03:29.033 06:40:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.033 surplus_hugepages=0 00:03:29.033 06:40:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.033 anon_hugepages=0 00:03:29.033 06:40:43 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:29.033 06:40:43 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:29.033 06:40:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.033 06:40:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.033 06:40:43 -- setup/common.sh@18 -- # local node= 00:03:29.033 06:40:43 -- setup/common.sh@19 -- # local var val 00:03:29.033 06:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.033 06:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.033 06:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.033 06:40:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.033 06:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.033 06:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36618364 kB' 'MemAvailable: 41361040 kB' 'Buffers: 2696 kB' 'Cached: 18350744 kB' 'SwapCached: 0 kB' 'Active: 
14347980 kB' 'Inactive: 4480652 kB' 'Active(anon): 13712848 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478380 kB' 'Mapped: 216148 kB' 'Shmem: 13237656 kB' 'KReclaimable: 241032 kB' 'Slab: 633180 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392148 kB' 'KernelStack: 12960 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14824224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199020 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.033 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.033 06:40:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 
06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 
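
[editor's note] Once the global HugePages_Total check passes, the test turns to per-node counts via get_nodes (setup/hugepages.sh@27-33, traced just below). A sketch assuming the standard kernel sysfs layout for per-node 2 MiB hugepage counters; the exact file the script reads is not visible in the trace, only the resulting 512/1024 assignments:

shopt -s extglob           # so the +([0-9]) glob below parses
declare -A nodes_sys=()

get_nodes_sketch() {
    local node
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything up to the last "node", leaving the index.
        # Reads 512 on node0 and 1024 on node1 in this run.
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this two-socket box
}
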
00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.035 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.035 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.036 06:40:43 -- setup/common.sh@33 -- # echo 1536 00:03:29.036 06:40:43 -- setup/common.sh@33 -- # return 0 00:03:29.036 06:40:43 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:29.036 06:40:43 -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.036 06:40:43 -- setup/hugepages.sh@27 -- # local node 00:03:29.036 06:40:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.036 06:40:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.036 06:40:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.036 06:40:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.036 06:40:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.036 06:40:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.036 06:40:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.036 06:40:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.036 06:40:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.036 06:40:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.036 06:40:43 -- setup/common.sh@18 -- # local node=0 00:03:29.036 06:40:43 -- setup/common.sh@19 -- # local var val 00:03:29.036 06:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.036 06:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.036 06:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.036 06:40:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.036 06:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.036 06:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.036 06:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21768004 kB' 'MemUsed: 11061880 kB' 'SwapCached: 0 kB' 'Active: 8579372 kB' 'Inactive: 192856 kB' 'Active(anon): 8148980 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 192856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8523108 kB' 'Mapped: 112840 kB' 'AnonPages: 252232 kB' 'Shmem: 7899860 kB' 'KernelStack: 7400 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117736 kB' 'Slab: 339488 kB' 'SReclaimable: 117736 kB' 'SUnreclaim: 221752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.036 06:40:43 -- setup/common.sh@32 -- # continue 00:03:29.036 06:40:43 -- 
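What get_nodes is doing above: it globs the NUMA nodes out of sysfs and records each node's current hugepage count (512 for node0, 1024 for node1 in this run). A minimal standalone sketch of that discovery step, assuming the usual sysfs layout; this is an illustration, not the verbatim setup/hugepages.sh source:

    #!/usr/bin/env bash
    # Sketch: enumerate NUMA nodes the way the traced get_nodes does and
    # record each node's current hugepage count in an associative array.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    for id in "${!nodes_sys[@]}"; do
        echo "node$id: ${nodes_sys[$id]} hugepages"
    done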
00:03:29.036 06:40:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.036 06:40:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.036 06:40:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.036 06:40:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.036 06:40:43 -- setup/common.sh@18 -- # local node=0 00:03:29.036 06:40:43 -- setup/common.sh@19 -- # local var val 00:03:29.036 06:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.036 06:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.036 06:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.036 06:40:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.036 06:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.036 06:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.036 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.036 06:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21768004 kB' 'MemUsed: 11061880 kB' 'SwapCached: 0 kB' 'Active: 8579372 kB' 'Inactive: 192856 kB' 'Active(anon): 8148980 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 192856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8523108 kB' 'Mapped: 112840 kB' 'AnonPages: 252232 kB' 'Shmem: 7899860 kB' 'KernelStack: 7400 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117736 kB' 'Slab: 339488 kB' 'SReclaimable: 117736 kB' 'SUnreclaim: 221752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: setup/common.sh@31-32 walks every node0 meminfo field from MemTotal onward, executing 'continue' on every key that is not HugePages_Surp ...]
00:03:29.037 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.037 06:40:43 -- setup/common.sh@33 -- # echo 0 00:03:29.037 06:40:43 -- setup/common.sh@33 -- # return 0 00:03:29.037 06:40:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
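The long field-by-field scans in this trace are all produced by the same helper: get_meminfo picks /proc/meminfo or the per-node sysfs copy, strips the "Node N " prefix that per-node files carry, then splits each line on ': ' until the requested key matches. A condensed, self-contained sketch of that logic (an illustration under those assumptions, not the verbatim setup/common.sh source):

    # Sketch of the traced get_meminfo helper: print one meminfo value,
    # optionally scoped to a NUMA node (second argument).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # Per-node copies live in sysfs and prefix every line with "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }          # drop the per-node prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                      # kB for sizes, pages for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    get_meminfo HugePages_Surp 0                 # per-node query, as in the trace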
00:03:29.037 06:40:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.037 06:40:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.037 06:40:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:29.037 06:40:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.037 06:40:43 -- setup/common.sh@18 -- # local node=1 00:03:29.037 06:40:43 -- setup/common.sh@19 -- # local var val 00:03:29.037 06:40:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.037 06:40:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.037 06:40:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:29.037 06:40:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:29.037 06:40:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.037 06:40:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.037 06:40:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.037 06:40:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.037 06:40:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14850684 kB' 'MemUsed: 12861160 kB' 'SwapCached: 0 kB' 'Active: 5768544 kB' 'Inactive: 4287796 kB' 'Active(anon): 5563804 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4287796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9830360 kB' 'Mapped: 103308 kB' 'AnonPages: 226148 kB' 'Shmem: 5337824 kB' 'KernelStack: 5560 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 123296 kB' 'Slab: 293692 kB' 'SReclaimable: 123296 kB' 'SUnreclaim: 170396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace condensed: setup/common.sh@31-32 walks every node1 meminfo field from MemTotal onward, executing 'continue' on every key that is not HugePages_Surp ...]
00:03:29.038 06:40:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.038 06:40:43 -- setup/common.sh@33 -- # echo 0 00:03:29.038 06:40:43 -- setup/common.sh@33 -- # return 0 00:03:29.038 06:40:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.038 06:40:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.038 06:40:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.038 06:40:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.038 06:40:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:29.038 06:40:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.038 06:40:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.038 06:40:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.038 06:40:43 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:03:29.038 06:40:43 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:29.038 00:03:29.038 real 0m1.510s 00:03:29.038 user 0m0.622s 00:03:29.038 sys 0m0.855s 00:03:29.038 06:40:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.038 06:40:43 -- common/autotest_common.sh@10 -- # set +x
00:03:29.038 ************************************
00:03:29.038 END TEST custom_alloc
00:03:29.038 ************************************
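custom_alloc has just passed its check: the expected split (nodes_test: 512 on node0, 1024 on node1) matches what the kernel reports per node. A sketch of an equivalent verification, with the expected counts taken from this run:

    # Sketch of the comparison custom_alloc just passed: expected per-node
    # split vs. what the kernel reports. Expected values are from this run.
    declare -A expected=([0]=512 [1]=1024)
    status=0
    for id in "${!expected[@]}"; do
        actual=$(awk '/HugePages_Total:/ {print $NF}' "/sys/devices/system/node/node$id/meminfo")
        echo "node$id=$actual expecting ${expected[$id]}"
        [[ $actual == "${expected[$id]}" ]] || status=1
    done
    exit $status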
00:03:29.038 06:40:43 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:29.038 06:40:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.038 06:40:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.038 06:40:43 -- common/autotest_common.sh@10 -- # set +x
00:03:29.038 ************************************
00:03:29.038 START TEST no_shrink_alloc
00:03:29.038 ************************************
00:03:29.038 06:40:43 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:29.038 06:40:43 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:29.038 06:40:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:29.038 06:40:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:29.038 06:40:43 -- setup/hugepages.sh@51 -- # shift 00:03:29.038 06:40:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:29.038 06:40:43 -- setup/hugepages.sh@52 -- # local node_ids 00:03:29.038 06:40:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.038 06:40:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:29.038 06:40:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:29.038 06:40:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:29.038 06:40:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.038 06:40:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:29.038 06:40:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.038 06:40:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.038 06:40:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.038 06:40:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:29.038 06:40:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.038 06:40:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:29.038 06:40:43 -- setup/hugepages.sh@73 -- # return 0 00:03:29.038 06:40:43 -- setup/hugepages.sh@198 -- # setup output 00:03:29.038 06:40:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.038 06:40:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.443 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:30.443 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.443 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:30.443 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:30.443 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:30.443 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:30.443 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:30.443 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:30.443 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:30.443 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:30.443 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:30.443 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:30.443 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:30.443 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:30.443 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:30.443 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:30.443 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
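Before the setup.sh output above, get_test_nr_hugepages 2097152 0 turned the requested size into a page count: with the 2048 kB Hugepagesize reported in the dumps below, 2097152 / 2048 = 1024 pages, all assigned to node 0. A sketch of that arithmetic (treating the size argument as kB, which is what makes the numbers in this log line up):

    # Sketch of the sizing step: requested kB -> hugepage count, pinned to
    # one node. Values mirror this run; the real test applies the count via
    # SPDK's setup.sh rather than this snippet.
    size_kb=2097152
    node_id=0
    hugepagesize_kb=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
    if (( size_kb < hugepagesize_kb )); then
        echo "request smaller than one hugepage" >&2
        exit 1
    fi
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 1024 here
    echo "nodes_test[$node_id]=$nr_hugepages"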
00:03:30.706 06:40:44 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:30.706 06:40:44 -- setup/hugepages.sh@89 -- # local node 00:03:30.706 06:40:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.706 06:40:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.706 06:40:44 -- setup/hugepages.sh@92 -- # local surp 00:03:30.706 06:40:44 -- setup/hugepages.sh@93 -- # local resv 00:03:30.706 06:40:44 -- setup/hugepages.sh@94 -- # local anon 00:03:30.706 06:40:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.706 06:40:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.706 06:40:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.706 06:40:44 -- setup/common.sh@18 -- # local node= 00:03:30.706 06:40:44 -- setup/common.sh@19 -- # local var val 00:03:30.706 06:40:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.706 06:40:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.706 06:40:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.706 06:40:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.706 06:40:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.706 06:40:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.706 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.706 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.706 06:40:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37619400 kB' 'MemAvailable: 42362076 kB' 'Buffers: 2696 kB' 'Cached: 18350816 kB' 'SwapCached: 0 kB' 'Active: 14348724 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713592 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478668 kB' 'Mapped: 216168 kB' 'Shmem: 13237728 kB' 'KReclaimable: 241032 kB' 'Slab: 633100 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392068 kB' 'KernelStack: 12928 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14824044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
[... xtrace condensed: setup/common.sh@31-32 walks every /proc/meminfo field from MemTotal onward, executing 'continue' on every key that is not AnonHugePages ...]
00:03:30.707 06:40:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.707 06:40:44 -- setup/common.sh@33 -- # echo 0 00:03:30.707 06:40:44 -- setup/common.sh@33 -- # return 0 00:03:30.707 06:40:44 -- setup/hugepages.sh@97 -- # anon=0
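verify_nr_hugepages is collecting the terms of the invariant checked earlier at setup/hugepages.sh@110, HugePages_Total == nr_hugepages + surp + resv; anon (AnonHugePages) only matters when transparent hugepages are not set to [never], and it comes back 0 here, as do surp and resv below. A sketch of gathering those terms, reusing the get_meminfo sketch shown earlier in this log:

    # Sketch of the terms verify_nr_hugepages gathers (all system-wide);
    # relies on the get_meminfo sketch defined above.
    anon=0
    # Count transparent hugepages only when THP is not '[never]'.
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)
    fi
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    echo "total=$total surp=$surp resv=$resv anon=$anon"   # all but total are 0 here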
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479092 kB' 'Mapped: 216168 kB' 'Shmem: 13237728 kB' 'KReclaimable: 241032 kB' 'Slab: 633100 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392068 kB' 'KernelStack: 12912 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14824424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.708 06:40:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.708 06:40:44 -- 
setup/common.sh@32 -- # continue 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.708 06:40:44 -- setup/common.sh@31 -- # read -r var val _
00:03:30.708 06:40:44 -- setup/common.sh@32 -- # (scan continues over the remaining /proc/meminfo keys, Active(file) through HugePages_Rsvd: none matches HugePages_Surp, so each test falls through to continue)
00:03:30.709 06:40:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.709 06:40:44 -- setup/common.sh@33 -- # echo 0
00:03:30.709 06:40:44 -- setup/common.sh@33 -- # return 0
00:03:30.709 06:40:44 -- setup/hugepages.sh@99 -- # surp=0
00:03:30.709 06:40:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.709 06:40:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.709 06:40:44 -- setup/common.sh@18 -- # local node=
00:03:30.709 06:40:44 -- setup/common.sh@19 -- # local var val
00:03:30.709 06:40:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.709 06:40:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.709 06:40:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.709 06:40:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.709 06:40:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.709 06:40:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.709 06:40:44 -- setup/common.sh@31 -- # IFS=': '
00:03:30.709 06:40:44 -- setup/common.sh@31 -- # read -r var val _
00:03:30.709 06:40:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37621460 kB' 'MemAvailable: 42364136 kB' 'Buffers: 2696 kB' 'Cached: 18350816 kB' 'SwapCached: 0 kB' 'Active: 14350012 kB' 'Inactive: 4480652 kB' 'Active(anon): 13714880 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479948 kB' 'Mapped: 216592 kB' 'Shmem: 13237728 kB' 'KReclaimable: 241032 kB' 'Slab: 633100 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392068 kB' 'KernelStack: 12976 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14826588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
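The long '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue' runs traced above are get_meminfo walking every /proc/meminfo line until the requested key matches, then echoing its value. A minimal stand-alone sketch of that loop follows; get_meminfo_sketch is a hypothetical name for illustration, not the SPDK helper itself.

  #!/usr/bin/env bash
  # Sketch: scan a meminfo-style file for one key and print its value.
  get_meminfo_sketch() {
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          # Every non-matching key is skipped, which is what produces
          # the long "continue" runs in the trace above.
          [[ $var == "$get" ]] || continue
          echo "$val"   # e.g. 0 for HugePages_Surp on this builder
          return 0
      done < "$mem_f"
      echo 0            # key absent: report 0, mirroring the trace's default
  }
  get_meminfo_sketch HugePages_Surp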
00:03:30.709 06:40:44 -- setup/common.sh@32 -- # (scan over /proc/meminfo keys MemTotal through HugePages_Free: none matches HugePages_Rsvd, so each test falls through to continue)
00:03:30.710 06:40:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.710 06:40:44 -- setup/common.sh@33 -- # echo 0
00:03:30.710 06:40:44 -- setup/common.sh@33 -- # return 0
00:03:30.710 06:40:44 -- setup/hugepages.sh@100 -- # resv=0
00:03:30.710 06:40:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:30.710 nr_hugepages=1024
00:03:30.710 06:40:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.710 resv_hugepages=0
00:03:30.710 06:40:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.710 surplus_hugepages=0
00:03:30.710 06:40:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.710 anon_hugepages=0
00:03:30.710 06:40:44 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.710 06:40:44 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:30.710 06:40:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.710 06:40:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.710 06:40:44 -- setup/common.sh@18 -- # local node=
00:03:30.710 06:40:44 -- setup/common.sh@19 -- # local var val
00:03:30.710 06:40:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.710 06:40:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.710 06:40:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.710 06:40:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.710 06:40:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.710 06:40:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.710 06:40:44 -- setup/common.sh@31 -- # IFS=': '
00:03:30.710 06:40:44 -- setup/common.sh@31 -- # read -r var val _
00:03:30.710 06:40:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37614480 kB' 'MemAvailable: 42357156 kB' 'Buffers: 2696 kB' 'Cached: 18350832 kB' 'SwapCached: 0 kB' 'Active: 14353236 kB' 'Inactive: 4480652 kB' 'Active(anon): 13718104 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483576 kB' 'Mapped: 216592 kB' 'Shmem: 13237744 kB' 'KReclaimable: 241032 kB' 'Slab: 633172 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392140 kB' 'KernelStack: 12960 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14830576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198924 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
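With surp, resv, and the pool total in hand, hugepages.sh asserts that the kernel-reported total accounts for the requested pages plus surplus and reserved, the two (( ... )) checks traced above. A self-contained sketch of that bookkeeping, using awk in place of the traced helper; meminfo() is a stand-in name, not an SPDK function.

  #!/usr/bin/env bash
  # Sketch of the consistency check traced above.
  meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

  nr_hugepages=1024                  # the size this run configured
  surp=$(meminfo HugePages_Surp)     # 0 in the trace above
  resv=$(meminfo HugePages_Rsvd)     # 0 in the trace above
  total=$(meminfo HugePages_Total)   # 1024 in the trace above

  # Consistent pool: total == requested + surplus + reserved.
  if (( total == nr_hugepages + surp + resv )); then
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  fi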
00:03:30.711 06:40:44 -- setup/common.sh@32 -- # (scan over /proc/meminfo keys MemTotal through Unaccepted: none matches HugePages_Total, so each test falls through to continue)
00:03:30.712 06:40:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.712 06:40:44 -- setup/common.sh@33 -- # echo 1024
00:03:30.712 06:40:44 -- setup/common.sh@33 -- # return 0
00:03:30.712 06:40:44 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.712 06:40:44 -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.712 06:40:44 -- setup/hugepages.sh@27 -- # local node
00:03:30.712 06:40:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.712 06:40:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:30.712 06:40:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.712 06:40:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:30.712 06:40:44 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.712 06:40:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:30.712 06:40:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.712 06:40:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.712 06:40:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.712 06:40:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.712 06:40:44 -- setup/common.sh@18 -- # local node=0
00:03:30.712 06:40:44 -- setup/common.sh@19 -- # local var val
00:03:30.712 06:40:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.712 06:40:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.712 06:40:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.712 06:40:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.712 06:40:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.712 06:40:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.712 06:40:44 -- setup/common.sh@31 -- # IFS=': '
00:03:30.712 06:40:44 -- setup/common.sh@31 -- # read -r var val _
00:03:30.712 06:40:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20737556 kB' 'MemUsed: 12092328 kB' 'SwapCached: 0 kB' 'Active: 8579536 kB' 'Inactive: 192856 kB' 'Active(anon): 8149144 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 192856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8523116 kB' 'Mapped: 113000 kB' 'AnonPages: 252400 kB' 'Shmem: 7899868 kB' 'KernelStack: 7400 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117736 kB' 'Slab: 339448 kB' 'SReclaimable: 117736 kB' 'SUnreclaim: 221712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
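When get_meminfo is handed a node number (HugePages_Surp 0 above), it switches mem_f to that node's sysfs meminfo, whose lines carry a "Node 0 " prefix that the traced extglob expansion "${mem[@]#Node +([0-9]) }" strips. A sketch of that per-node variant follows; node_meminfo is a hypothetical name for illustration.

  #!/usr/bin/env bash
  # Sketch: the same key scan, but against a node's own meminfo file.
  node_meminfo() {
      local get=$1 node=$2 line var val _
      while read -r line; do
          line=${line#Node "$node" }        # drop the "Node 0 " prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      echo 0                                # key absent: report 0
  }
  node_meminfo HugePages_Total 0            # 1024 on node0 per the trace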
00:03:30.712 06:40:44 -- setup/common.sh@32 -- # (scan over node0 meminfo keys MemTotal through FileHugePages: none matches HugePages_Surp, so each test falls through to continue)
00:03:30.713 06:40:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.713 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.713 06:40:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.713 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.713 06:40:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.713 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.713 06:40:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.713 06:40:44 -- setup/common.sh@32 -- # continue 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.713 06:40:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.713 06:40:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.713 06:40:44 -- setup/common.sh@33 -- # echo 0 00:03:30.713 06:40:44 -- setup/common.sh@33 -- # return 0 00:03:30.713 06:40:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.713 06:40:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.713 06:40:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.713 06:40:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.713 06:40:44 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.713 node0=1024 expecting 1024 00:03:30.713 06:40:44 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.713 06:40:44 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:30.713 06:40:44 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:30.713 06:40:44 -- setup/hugepages.sh@202 -- # setup output 00:03:30.713 06:40:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.713 06:40:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.089 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:32.089 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.089 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:32.089 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:32.089 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:32.089 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:32.089 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:32.089 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:32.089 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:32.089 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:32.089 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:32.089 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:32.089 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:32.089 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:32.089 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:32.089 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:32.089 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:32.352 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:32.352 06:40:46 -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:32.352 06:40:46 -- setup/hugepages.sh@89 -- # local node 00:03:32.352 06:40:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.352 06:40:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.352 06:40:46 -- setup/hugepages.sh@92 -- # local surp 00:03:32.352 06:40:46 -- setup/hugepages.sh@93 -- # local resv 00:03:32.352 06:40:46 -- setup/hugepages.sh@94 -- # local anon 00:03:32.352 06:40:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.352 06:40:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.352 06:40:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.352 06:40:46 -- setup/common.sh@18 -- # local node= 00:03:32.352 06:40:46 -- setup/common.sh@19 -- # local var val 00:03:32.352 06:40:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.352 06:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.352 06:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.352 06:40:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.352 06:40:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.352 06:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.352 06:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37587324 kB' 'MemAvailable: 42330000 kB' 'Buffers: 2696 kB' 'Cached: 18350892 kB' 'SwapCached: 0 kB' 'Active: 14348564 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713432 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478836 kB' 'Mapped: 216172 kB' 'Shmem: 13237804 kB' 'KReclaimable: 241032 kB' 'Slab: 633192 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392160 kB' 'KernelStack: 13024 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14824620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199132 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB' 00:03:32.352 06:40:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.352 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.352 06:40:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.352 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.352 06:40:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.352 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.352 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.352 06:40:46 -- 
00:03:32.352 06:40:46 -- setup/common.sh@31-32 -- # read loop: every remaining /proc/meminfo key, Buffers through HardwareCorrupted, fails the AnonHugePages match and hits continue
00:03:32.353 06:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:32.353 06:40:46 -- setup/common.sh@33 -- # echo 0
00:03:32.353 06:40:46 -- setup/common.sh@33 -- # return 0
00:03:32.353 06:40:46 -- setup/hugepages.sh@97 -- # anon=0
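The pass traced above is the whole of setup/common.sh's get_meminfo: mapfile slurps the chosen meminfo file into an array, an extglob expansion strips any per-node prefix, and an IFS=': ' read loop walks key/value pairs until the requested key matches, echoing its value. A minimal reconstruction, inferred from the trace rather than copied from SPDK's source, so the exact function body is an assumption:

    shopt -s extglob                      # required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With $node empty this probes .../node/node/meminfo, which does not
        # exist, so the system-wide /proc/meminfo is kept (as in the trace).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it so both
        # sources parse identically.
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 against the dump shown in this log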
00:03:32.353 06:40:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:32.353 06:40:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.353 06:40:46 -- setup/common.sh@18 -- # local node=
00:03:32.353 06:40:46 -- setup/common.sh@19 -- # local var val
00:03:32.353 06:40:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:32.353 06:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.353 06:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.353 06:40:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.353 06:40:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.353 06:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.353 06:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37590384 kB' 'MemAvailable: 42333060 kB' 'Buffers: 2696 kB' 'Cached: 18350896 kB' 'SwapCached: 0 kB' 'Active: 14348404 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713272 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478660 kB' 'Mapped: 216172 kB' 'Shmem: 13237808 kB' 'KReclaimable: 241032 kB' 'Slab: 633152 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392120 kB' 'KernelStack: 12960 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14824632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199100 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
00:03:32.353 06:40:46 -- setup/common.sh@31-32 -- # read loop: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue
00:03:32.353 06:40:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.353 06:40:46 -- setup/common.sh@33 -- # echo 0
00:03:32.353 06:40:46 -- setup/common.sh@33 -- # return 0
00:03:32.353 06:40:46 -- setup/hugepages.sh@99 -- # surp=0
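The common.sh@29 expansion is the one non-obvious step in these passes: with extglob enabled, +([0-9]) matches a run of digits, so the parameter expansion deletes a leading "Node N " from every array element. In isolation, with sample lines taken from the node0 dump later in this log:

    shopt -s extglob
    mem=('Node 0 MemTotal: 32829884 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # MemTotal: 32829884 kB
    # HugePages_Surp: 0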
00:03:32.353 06:40:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:32.353 06:40:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:32.353 06:40:46 -- setup/common.sh@18 -- # local node=
00:03:32.353 06:40:46 -- setup/common.sh@19 -- # local var val
00:03:32.353 06:40:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:32.353 06:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.353 06:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.353 06:40:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.353 06:40:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.353 06:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.353 06:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37590612 kB' 'MemAvailable: 42333288 kB' 'Buffers: 2696 kB' 'Cached: 18350900 kB' 'SwapCached: 0 kB' 'Active: 14348148 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713016 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478368 kB' 'Mapped: 216168 kB' 'Shmem: 13237812 kB' 'KReclaimable: 241032 kB' 'Slab: 633248 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392216 kB' 'KernelStack: 13024 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14824648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199100 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
00:03:32.355 06:40:46 -- setup/common.sh@31-32 -- # read loop: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue
00:03:32.356 06:40:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:32.356 06:40:46 -- setup/common.sh@33 -- # echo 0
00:03:32.356 06:40:46 -- setup/common.sh@33 -- # return 0
00:03:32.356 06:40:46 -- setup/hugepages.sh@100 -- # resv=0
00:03:32.356 06:40:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:32.356 nr_hugepages=1024
00:03:32.356 06:40:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:32.356 resv_hugepages=0
00:03:32.356 06:40:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:32.356 surplus_hugepages=0
00:03:32.356 06:40:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:32.356 anon_hugepages=0
00:03:32.356 06:40:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.356 06:40:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
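The hugepages.sh@107-@110 guards above and just below assert that the kernel's hugepage pool has settled: the requested count must equal the kernel-reported HugePages_Total with surplus and reserved pages folded in, exactly the arithmetic the trace expands. A standalone sketch of the same invariant; get_field is an illustrative helper, not part of SPDK:

    get_field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    nr_hugepages=1024                    # the target this run configured
    surp=$(get_field HugePages_Surp)     # 0 in the dumps above
    resv=$(get_field HugePages_Rsvd)     # 0 in the dumps above

    if (( $(get_field HugePages_Total) == nr_hugepages + surp + resv )); then
        echo "hugepage pool settled at $nr_hugepages pages"
    else
        echo "hugepage pool inconsistent" >&2
    fi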
00:03:32.356 06:40:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:32.356 06:40:46 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:32.356 06:40:46 -- setup/common.sh@18 -- # local node=
00:03:32.356 06:40:46 -- setup/common.sh@19 -- # local var val
00:03:32.356 06:40:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:32.356 06:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.356 06:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.356 06:40:46 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.356 06:40:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.356 06:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.356 06:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37590868 kB' 'MemAvailable: 42333544 kB' 'Buffers: 2696 kB' 'Cached: 18350920 kB' 'SwapCached: 0 kB' 'Active: 14348440 kB' 'Inactive: 4480652 kB' 'Active(anon): 13713308 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4480652 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478628 kB' 'Mapped: 216168 kB' 'Shmem: 13237832 kB' 'KReclaimable: 241032 kB' 'Slab: 633248 kB' 'SReclaimable: 241032 kB' 'SUnreclaim: 392216 kB' 'KernelStack: 13024 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14824664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199100 kB' 'VmallocChunk: 0 kB' 'Percpu: 39552 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2801244 kB' 'DirectMap2M: 19138560 kB' 'DirectMap1G: 47185920 kB'
00:03:32.357 06:40:46 -- setup/common.sh@31-32 -- # read loop: every key from MemTotal through Unaccepted fails the HugePages_Total match and hits continue
00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:32.358 06:40:46 -- setup/common.sh@33 -- # echo 1024
00:03:32.358 06:40:46 -- setup/common.sh@33 -- # return 0
00:03:32.358 06:40:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.358 06:40:46 -- setup/hugepages.sh@112 -- # get_nodes
00:03:32.358 06:40:46 -- setup/hugepages.sh@27 -- # local node
00:03:32.358 06:40:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.358 06:40:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:32.358 06:40:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.358 06:40:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:32.358 06:40:46 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:32.358 06:40:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:32.358 06:40:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:32.358 06:40:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:32.358 06:40:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:32.358 06:40:46 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.358 06:40:46 -- setup/common.sh@18 -- # local node=0
00:03:32.358 06:40:46 -- setup/common.sh@19 -- # local var val
00:03:32.358 06:40:46 -- setup/common.sh@20 -- # local mem_f mem
00:03:32.358 06:40:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.358 06:40:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:32.358 06:40:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:32.358 06:40:46 -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.358 06:40:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.358 06:40:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20711420 kB' 'MemUsed: 12118464 kB' 'SwapCached: 0 kB' 'Active: 8580108 kB' 'Inactive: 192856 kB' 'Active(anon): 8149716 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 192856 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8523124 kB' 'Mapped: 112860 kB' 'AnonPages: 252944 kB' 'Shmem: 7899876 kB' 'KernelStack: 7432 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 117736 kB' 'Slab: 339468 kB' 'SReclaimable: 117736 kB' 'SUnreclaim: 221732 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
-r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 
06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.358 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.358 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # continue 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.359 06:40:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.359 06:40:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.359 06:40:46 -- setup/common.sh@33 -- # echo 0 00:03:32.359 06:40:46 -- setup/common.sh@33 -- # return 0 00:03:32.359 06:40:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.359 06:40:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.359 06:40:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.359 06:40:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.359 06:40:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.359 node0=1024 expecting 1024 00:03:32.359 06:40:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.359 00:03:32.359 real 0m3.375s 00:03:32.359 user 0m1.395s 00:03:32.359 sys 0m1.919s 00:03:32.359 06:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.359 06:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:32.359 ************************************ 00:03:32.359 END TEST no_shrink_alloc 00:03:32.359 ************************************ 00:03:32.359 06:40:46 -- setup/hugepages.sh@217 -- # clear_hp 00:03:32.359 06:40:46 -- setup/hugepages.sh@37 -- # local node hp 00:03:32.359 06:40:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.359 
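The get_meminfo walk condensed above reduces to a small lookup: prefer the per-node meminfo file when it exists, strip the "Node N " prefix, then split each line on ': ' and return the value for the requested key. A minimal standalone sketch of that lookup (hypothetical helper name and sed-based prefix stripping; setup/common.sh structures this differently, with mapfile and an explicit read loop):

get_meminfo_sketch() {
  local get=$1 node=$2 var val _
  local mem_f=/proc/meminfo
  # Prefer the per-NUMA-node view when it exists, as the trace above does.
  [[ -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
  # Per-node lines look like "Node 0 HugePages_Surp: 0"; drop the prefix,
  # then split on ': ' and print the value of the requested key.
  while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
  return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0   -> prints 0 on this box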
06:40:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.359 06:40:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.359 06:40:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.359 06:40:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.359 06:40:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.359 06:40:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.359 06:40:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.359 06:40:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.359 06:40:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.359 06:40:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.359 06:40:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.359 00:03:32.359 real 0m12.721s 00:03:32.359 user 0m4.882s 00:03:32.360 sys 0m6.683s 00:03:32.360 06:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.360 06:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:32.360 ************************************ 00:03:32.360 END TEST hugepages 00:03:32.360 ************************************ 00:03:32.618 06:40:46 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.618 06:40:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:32.618 06:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:32.618 06:40:46 -- common/autotest_common.sh@10 -- # set +x 00:03:32.618 ************************************ 00:03:32.618 START TEST driver 00:03:32.618 ************************************ 00:03:32.618 06:40:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:32.618 * Looking for test storage... 
00:03:32.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:32.618 06:40:46 -- setup/driver.sh@68 -- # setup reset 00:03:32.618 06:40:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:32.618 06:40:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.148 06:40:49 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:35.148 06:40:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.148 06:40:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.148 06:40:49 -- common/autotest_common.sh@10 -- # set +x 00:03:35.148 ************************************ 00:03:35.148 START TEST guess_driver 00:03:35.148 ************************************ 00:03:35.148 06:40:49 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:35.148 06:40:49 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:35.148 06:40:49 -- setup/driver.sh@47 -- # local fail=0 00:03:35.148 06:40:49 -- setup/driver.sh@49 -- # pick_driver 00:03:35.148 06:40:49 -- setup/driver.sh@36 -- # vfio 00:03:35.148 06:40:49 -- setup/driver.sh@21 -- # local iommu_groups 00:03:35.148 06:40:49 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:35.148 06:40:49 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:35.148 06:40:49 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:35.148 06:40:49 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:35.148 06:40:49 -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:03:35.148 06:40:49 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:35.148 06:40:49 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:35.148 06:40:49 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:35.148 06:40:49 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:35.148 06:40:49 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:35.148 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:35.148 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:35.148 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:35.148 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:35.148 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:35.148 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:35.148 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:35.148 06:40:49 -- setup/driver.sh@30 -- # return 0 00:03:35.148 06:40:49 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:35.148 06:40:49 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:35.148 06:40:49 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:35.148 06:40:49 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:35.148 Looking for driver=vfio-pci 00:03:35.148 06:40:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.148 06:40:49 -- setup/driver.sh@45 -- # setup output config 00:03:35.148 06:40:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.148 06:40:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... repetitive xtrace elided: for every device line printed by 'setup output config', the '[[ -> == \-\> ]]' marker check and the '[[ vfio-pci == vfio-pci ]]' comparison repeat, then the next line is read ...]
00:03:37.458 06:40:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:37.716 06:40:51 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:37.716 06:40:51 -- setup/driver.sh@65 -- # setup reset 00:03:37.716 06:40:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:37.716 06:40:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.241 00:03:40.241 real 0m5.136s 00:03:40.241 user 0m1.264s 00:03:40.241 sys 0m2.043s 00:03:40.241 06:40:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.241 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:03:40.241 ************************************ 00:03:40.241 END TEST guess_driver 00:03:40.241 ************************************ 00:03:40.241 00:03:40.242 real 0m7.741s 00:03:40.242 user 0m1.862s 00:03:40.242 sys 0m3.203s 00:03:40.242 06:40:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.242 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:03:40.242 ************************************ 00:03:40.242 END TEST driver 00:03:40.242 ************************************ 00:03:40.242 06:40:54 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:40.242 06:40:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.242 06:40:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.242 06:40:54 -- common/autotest_common.sh@10 -- # set +x 00:03:40.242 ************************************ 00:03:40.242 START TEST devices 00:03:40.242 ************************************ 00:03:40.242 06:40:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:40.242 * Looking for test storage...
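The guess_driver pass above settles on vfio-pci because the IOMMU groups directory is populated (189 groups) and modprobe can resolve the vfio_pci module. A minimal sketch of that decision, with a hypothetical function name and assuming the usual uio_pci_generic fallback (setup/driver.sh splits this across several helpers):

pick_driver_sketch() {
  shopt -s nullglob
  local groups=(/sys/kernel/iommu_groups/*)
  # vfio-pci needs a working IOMMU (populated groups) unless unsafe no-IOMMU
  # mode is enabled via /sys/module/vfio/parameters/enable_unsafe_noiommu_mode.
  if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
      echo vfio-pci
  elif modprobe --show-depends uio_pci_generic &> /dev/null; then
      echo uio_pci_generic
  else
      echo 'No valid driver found'
  fi
}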
00:03:40.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.242 06:40:54 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:40.242 06:40:54 -- setup/devices.sh@192 -- # setup reset 00:03:40.242 06:40:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.242 06:40:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.142 06:40:55 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:42.142 06:40:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:42.142 06:40:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:42.142 06:40:55 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:42.142 06:40:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:42.142 06:40:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:42.142 06:40:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:42.142 06:40:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:42.142 06:40:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:42.142 06:40:55 -- setup/devices.sh@196 -- # blocks=() 00:03:42.142 06:40:55 -- setup/devices.sh@196 -- # declare -a blocks 00:03:42.142 06:40:55 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:42.142 06:40:55 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:42.142 06:40:55 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:42.142 06:40:55 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:42.142 06:40:55 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:42.142 06:40:55 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:42.142 06:40:55 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:42.142 06:40:55 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:42.142 06:40:55 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:42.142 06:40:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:42.142 06:40:55 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:42.142 No valid GPT data, bailing 00:03:42.142 06:40:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:42.142 06:40:56 -- scripts/common.sh@393 -- # pt= 00:03:42.142 06:40:56 -- scripts/common.sh@394 -- # return 1 00:03:42.142 06:40:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:42.142 06:40:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:42.142 06:40:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:42.143 06:40:56 -- setup/common.sh@80 -- # echo 1000204886016 00:03:42.143 06:40:56 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:42.143 06:40:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:42.143 06:40:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:42.143 06:40:56 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:42.143 06:40:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:42.143 06:40:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:42.143 06:40:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.143 06:40:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.143 06:40:56 -- common/autotest_common.sh@10 -- # set +x 00:03:42.143 ************************************ 00:03:42.143 START TEST nvme_mount 00:03:42.143 ************************************ 00:03:42.143 06:40:56 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:03:42.143 06:40:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:42.143 06:40:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:42.143 06:40:56 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.143 06:40:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.143 06:40:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:42.143 06:40:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:42.143 06:40:56 -- setup/common.sh@40 -- # local part_no=1 00:03:42.143 06:40:56 -- setup/common.sh@41 -- # local size=1073741824 00:03:42.143 06:40:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:42.143 06:40:56 -- setup/common.sh@44 -- # parts=() 00:03:42.143 06:40:56 -- setup/common.sh@44 -- # local parts 00:03:42.143 06:40:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:42.143 06:40:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:42.143 06:40:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:42.143 06:40:56 -- setup/common.sh@46 -- # (( part++ )) 00:03:42.143 06:40:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:42.143 06:40:56 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:42.143 06:40:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:42.143 06:40:56 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:43.076 Creating new GPT entries in memory. 00:03:43.076 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:43.076 other utilities. 00:03:43.076 06:40:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:43.076 06:40:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.076 06:40:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:43.076 06:40:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:43.076 06:40:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:44.011 Creating new GPT entries in memory. 00:03:44.011 The operation has completed successfully. 
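The partition step just traced amounts to: take the whole-disk lock, wipe the existing GPT, create one 1 GiB partition starting at LBA 2048, and wait for the kernel and udev to publish the new node before using it. An equivalent minimal sequence (a sketch only; /mnt/nvme_mount is a hypothetical mount point and udevadm settle stands in for the repo's sync_dev_uevents.sh):

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                          # destroy existing GPT/MBR structures
# 1073741824 bytes / 512 = 2097152 sectors; 2048 + 2097152 - 1 = 2099199
flock "$disk" sgdisk "$disk" --new=1:2048:2099199
udevadm settle                                    # wait for /dev/nvme0n1p1 to appear
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" /mnt/nvme_mount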
00:03:44.011 06:40:58 -- setup/common.sh@57 -- # (( part++ )) 00:03:44.011 06:40:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.011 06:40:58 -- setup/common.sh@62 -- # wait 361347 00:03:44.011 06:40:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.011 06:40:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:44.011 06:40:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.011 06:40:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:44.011 06:40:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:44.011 06:40:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.011 06:40:58 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.011 06:40:58 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:44.011 06:40:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:44.011 06:40:58 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.011 06:40:58 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:44.011 06:40:58 -- setup/devices.sh@53 -- # local found=0 00:03:44.011 06:40:58 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:44.011 06:40:58 -- setup/devices.sh@56 -- # : 00:03:44.011 06:40:58 -- setup/devices.sh@59 -- # local pci status 00:03:44.011 06:40:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.011 06:40:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:44.011 06:40:58 -- setup/devices.sh@47 -- # setup output config 00:03:44.011 06:40:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.011 06:40:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:45.387 06:40:59 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:45.387 06:40:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:45.387 06:40:59 -- setup/devices.sh@63 -- # found=1 00:03:45.387 06:40:59 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... repetitive xtrace elided: PCI functions 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 are each compared against 0000:88:00.0 and skipped ...]
00:03:45.387 06:40:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.387 06:40:59 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:45.387 06:40:59 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.387 06:40:59 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.387 06:40:59 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.387 06:40:59 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:45.387 06:40:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.387 06:40:59 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.387 06:40:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.387 06:40:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:45.645 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.645 06:40:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.645 06:40:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:45.903 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:45.903 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:45.903
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:45.903 06:40:59 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:45.903 06:40:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:45.903 06:40:59 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.903 06:40:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:45.903 06:40:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:45.903 06:40:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.903 06:40:59 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.903 06:40:59 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:45.903 06:40:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:45.903 06:40:59 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:45.903 06:40:59 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:45.903 06:40:59 -- setup/devices.sh@53 -- # local found=0 00:03:45.903 06:40:59 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:45.903 06:40:59 -- setup/devices.sh@56 -- # : 00:03:45.903 06:40:59 -- setup/devices.sh@59 -- # local pci status 00:03:45.903 06:40:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.903 06:40:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:45.903 06:40:59 -- setup/devices.sh@47 -- # setup output config 00:03:45.903 06:40:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.903 06:40:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.276 06:41:01 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:47.276 06:41:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:47.276 06:41:01 -- setup/devices.sh@63 -- # found=1 00:03:47.276 06:41:01 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... repetitive xtrace elided: the sixteen I/OAT functions are again compared against 0000:88:00.0 and skipped ...]
00:03:47.277 06:41:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:47.277 06:41:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:47.277 06:41:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.277 06:41:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:47.277 06:41:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:47.277 06:41:01 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:47.277 06:41:01 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:47.277 06:41:01 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:47.277 06:41:01 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:47.277 06:41:01 -- setup/devices.sh@50 -- # local mount_point= 00:03:47.277 06:41:01 -- setup/devices.sh@51 -- # local test_file= 00:03:47.277 06:41:01 -- setup/devices.sh@53 -- # local found=0 00:03:47.277 06:41:01 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:47.277 06:41:01 -- setup/devices.sh@59 -- # local pci status 00:03:47.277 06:41:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:47.277 06:41:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:47.277 06:41:01 -- setup/devices.sh@47 -- # setup output config 00:03:47.277 06:41:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.277 06:41:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.684 06:41:02 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.684 06:41:02 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:48.684 06:41:02 -- setup/devices.sh@63 -- # found=1 00:03:48.684 06:41:02 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... repetitive xtrace elided: the same sixteen PCI functions are compared against 0000:88:00.0 and skipped ...]
00:03:48.684 06:41:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:48.684 06:41:02 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:48.684 06:41:02 -- setup/devices.sh@68 -- # return 0 00:03:48.684 06:41:02 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:48.684 06:41:02 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:48.684 06:41:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1
]] 00:03:48.684 06:41:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:48.684 06:41:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:48.943 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:48.943 00:03:48.943 real 0m6.860s 00:03:48.943 user 0m1.704s 00:03:48.943 sys 0m2.772s 00:03:48.943 06:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.943 06:41:02 -- common/autotest_common.sh@10 -- # set +x 00:03:48.943 ************************************ 00:03:48.943 END TEST nvme_mount 00:03:48.943 ************************************ 00:03:48.943 06:41:02 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:48.943 06:41:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.943 06:41:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.943 06:41:02 -- common/autotest_common.sh@10 -- # set +x 00:03:48.943 ************************************ 00:03:48.943 START TEST dm_mount 00:03:48.943 ************************************ 00:03:48.943 06:41:02 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:48.943 06:41:02 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:48.943 06:41:02 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:48.943 06:41:02 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:48.943 06:41:02 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:48.943 06:41:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:48.943 06:41:02 -- setup/common.sh@40 -- # local part_no=2 00:03:48.943 06:41:02 -- setup/common.sh@41 -- # local size=1073741824 00:03:48.943 06:41:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:48.943 06:41:02 -- setup/common.sh@44 -- # parts=() 00:03:48.943 06:41:02 -- setup/common.sh@44 -- # local parts 00:03:48.943 06:41:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:48.943 06:41:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:48.943 06:41:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:48.943 06:41:02 -- setup/common.sh@46 -- # (( part++ )) 00:03:48.943 06:41:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:48.943 06:41:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:48.943 06:41:02 -- setup/common.sh@46 -- # (( part++ )) 00:03:48.943 06:41:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:48.943 06:41:02 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:48.943 06:41:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:48.943 06:41:02 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:49.881 Creating new GPT entries in memory. 00:03:49.881 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:49.881 other utilities. 00:03:49.881 06:41:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:49.881 06:41:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:49.881 06:41:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:49.881 06:41:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:49.881 06:41:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:50.817 Creating new GPT entries in memory. 00:03:50.817 The operation has completed successfully. 
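dm_mount is now repeating the same partitioning dance for two 1 GiB partitions; the lines that follow add partition 2 and then build a device-mapper node over both. A sketch of an equivalent linear concatenation (hypothetical table; the test's exact dmsetup table is not shown in this excerpt, and udevadm settle again stands in for sync_dev_uevents.sh):

flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
udevadm settle
p1=$(blockdev --getsz /dev/nvme0n1p1)             # sizes in 512-byte sectors
p2=$(blockdev --getsz /dev/nvme0n1p2)
# dm table rows: <logical start> <length> linear <backing device> <backing offset>
dmsetup create nvme_dm_test <<EOF
0 $p1 linear /dev/nvme0n1p1 0
$p1 $p2 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test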
00:03:50.817 06:41:04 -- setup/common.sh@57 -- # (( part++ )) 00:03:50.817 06:41:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.817 06:41:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.817 06:41:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.818 06:41:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:52.197 The operation has completed successfully. 00:03:52.197 06:41:05 -- setup/common.sh@57 -- # (( part++ )) 00:03:52.197 06:41:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:52.197 06:41:05 -- setup/common.sh@62 -- # wait 364107 00:03:52.197 06:41:06 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:52.197 06:41:06 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.197 06:41:06 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.197 06:41:06 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:52.197 06:41:06 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:52.197 06:41:06 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.197 06:41:06 -- setup/devices.sh@161 -- # break 00:03:52.197 06:41:06 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.197 06:41:06 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:52.197 06:41:06 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:52.197 06:41:06 -- setup/devices.sh@166 -- # dm=dm-0 00:03:52.197 06:41:06 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:52.197 06:41:06 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:52.197 06:41:06 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.197 06:41:06 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:52.197 06:41:06 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.197 06:41:06 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:52.197 06:41:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:52.197 06:41:06 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.197 06:41:06 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.197 06:41:06 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:52.197 06:41:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:52.197 06:41:06 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:52.197 06:41:06 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:52.197 06:41:06 -- setup/devices.sh@53 -- # local found=0 00:03:52.197 06:41:06 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:52.197 06:41:06 -- setup/devices.sh@56 -- # : 00:03:52.197 06:41:06 -- 
setup/devices.sh@59 -- # local pci status 00:03:52.197 06:41:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.197 06:41:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:52.197 06:41:06 -- setup/devices.sh@47 -- # setup output config 00:03:52.197 06:41:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.197 06:41:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:53.132 06:41:07 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:53.132 06:41:07 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:53.132 06:41:07 -- setup/devices.sh@63 -- # found=1 00:03:53.132 06:41:07 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... repetitive xtrace elided: the sixteen I/OAT functions are compared against 0000:88:00.0 and skipped ...]
00:03:53.391 06:41:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.391 06:41:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:53.391 06:41:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.391 06:41:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:53.391 06:41:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:53.391 06:41:07 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.391 06:41:07 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:53.391 06:41:07 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:53.391 06:41:07 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:53.391 06:41:07 -- setup/devices.sh@50 -- # local mount_point= 00:03:53.391 06:41:07 -- setup/devices.sh@51 -- # local test_file= 00:03:53.391 06:41:07 -- setup/devices.sh@53 -- # local found=0 00:03:53.391 06:41:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:53.391 06:41:07 -- setup/devices.sh@59 -- # local pci status 00:03:53.391 06:41:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.391 06:41:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:53.391 06:41:07 -- setup/devices.sh@47 -- # setup output config 00:03:53.391 06:41:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.391 06:41:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.768 06:41:08 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:54.768 06:41:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:54.768 06:41:08 -- setup/devices.sh@63 -- # found=1 00:03:54.768 06:41:08 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... repetitive xtrace elided: the sixteen I/OAT functions are compared against 0000:88:00.0 and skipped ...]
00:03:54.768 06:41:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.768 06:41:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.768 06:41:08 -- setup/devices.sh@68 -- # return 0 00:03:54.768 06:41:08 -- setup/devices.sh@187 -- # cleanup_dm 00:03:54.768 06:41:08 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:54.768 06:41:08 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:54.768 06:41:08 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:54.768 06:41:08 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.768 06:41:08 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:54.768 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.768 06:41:08 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:54.768 06:41:08 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:54.768 00:03:54.768 real 0m5.969s 00:03:54.768 user 0m1.094s 00:03:54.768 sys 0m1.782s 00:03:54.768 06:41:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.768 06:41:08 -- common/autotest_common.sh@10 -- # set +x 00:03:54.768 ************************************ 00:03:54.768 END TEST dm_mount 00:03:54.768 ************************************ 00:03:54.768 06:41:08 -- setup/devices.sh@1 -- # cleanup 00:03:54.768 06:41:08 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:54.768 06:41:08 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.768 06:41:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.768 06:41:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.768 06:41:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.768 06:41:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:55.026 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:55.026 /dev/nvme0n1: 8 bytes were erased at offset
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:55.026 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:55.026 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:55.026 06:41:09 -- setup/devices.sh@12 -- # cleanup_dm 00:03:55.026 06:41:09 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:55.026 06:41:09 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:55.026 06:41:09 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:55.026 06:41:09 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:55.026 06:41:09 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:55.026 06:41:09 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:55.026 00:03:55.026 real 0m14.838s 00:03:55.026 user 0m3.498s 00:03:55.026 sys 0m5.635s 00:03:55.026 06:41:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.026 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:03:55.026 ************************************ 00:03:55.026 END TEST devices 00:03:55.026 ************************************ 00:03:55.026 00:03:55.026 real 0m46.833s 00:03:55.026 user 0m13.951s 00:03:55.026 sys 0m21.560s 00:03:55.026 06:41:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.026 06:41:09 -- common/autotest_common.sh@10 -- # set +x 00:03:55.026 ************************************ 00:03:55.026 END TEST setup.sh 00:03:55.026 ************************************ 00:03:55.026 06:41:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:56.400 Hugepages 00:03:56.400 node hugesize free / total 00:03:56.400 node0 1048576kB 0 / 0 00:03:56.400 node0 2048kB 2048 / 2048 00:03:56.400 node1 1048576kB 0 / 0 00:03:56.400 node1 2048kB 0 / 0 00:03:56.400 00:03:56.400 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:56.400 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:56.400 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:56.400 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:56.400 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:56.400 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:56.400 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:56.400 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:56.400 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:56.400 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:56.400 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:56.400 06:41:10 -- spdk/autotest.sh@141 -- # uname -s 00:03:56.400 06:41:10 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:56.400 06:41:10 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:56.400 06:41:10 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:57.775 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:57.775 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:57.775 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:57.775 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:57.775 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:57.775 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:03:57.775 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:57.775 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:57.775 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:58.713 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.713 06:41:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:00.090 06:41:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:00.090 06:41:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:00.090 06:41:13 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:00.090 06:41:13 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:00.090 06:41:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:00.090 06:41:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:00.090 06:41:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.090 06:41:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:00.090 06:41:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:00.090 06:41:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:00.090 06:41:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:00.090 06:41:13 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.023 Waiting for block devices as requested 00:04:01.023 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:01.282 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:01.282 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:01.282 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:01.282 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:01.540 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:01.540 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:01.540 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:01.540 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:01.540 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:01.798 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:01.798 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:01.798 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:02.057 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:02.057 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:02.057 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:02.057 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:02.315 06:41:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:02.315 06:41:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:04:02.315 06:41:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:02.315 06:41:16 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:02.315 06:41:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:02.315 06:41:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:02.315 06:41:16 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:04:02.315 06:41:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:02.315 06:41:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:02.315 06:41:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:02.315 06:41:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:02.315 06:41:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:02.315 06:41:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:02.315 06:41:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:02.315 06:41:16 -- common/autotest_common.sh@1542 -- # continue 00:04:02.315 06:41:16 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:02.315 06:41:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:02.315 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:02.315 06:41:16 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:02.315 06:41:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:02.315 06:41:16 -- common/autotest_common.sh@10 -- # set +x 00:04:02.315 06:41:16 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:03.722 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.722 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.722 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.722 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.722 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.722 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.722 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.722 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:03.722 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:04.658 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:04.658 06:41:18 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:04.916 06:41:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:04.916 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.916 06:41:18 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:04.916 06:41:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:04.916 06:41:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:04.916 06:41:18 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:04.916 06:41:18 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:04.916 06:41:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:04.916 06:41:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:04.916 
06:41:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:04.916 06:41:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.916 06:41:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:04.916 06:41:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:04.916 06:41:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:04.916 06:41:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:04:04.916 06:41:18 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:04.916 06:41:18 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:04.916 06:41:18 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:04.916 06:41:18 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:04.916 06:41:18 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:04.916 06:41:18 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:04:04.916 06:41:18 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:04:04.916 06:41:18 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=370118 00:04:04.916 06:41:18 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:04.916 06:41:18 -- common/autotest_common.sh@1583 -- # waitforlisten 370118 00:04:04.916 06:41:18 -- common/autotest_common.sh@819 -- # '[' -z 370118 ']' 00:04:04.916 06:41:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.916 06:41:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:04.916 06:41:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.916 06:41:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:04.916 06:41:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.916 [2024-05-15 06:41:19.034070] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
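The cleanup prologue above selects its target by PCI device ID rather than by name: every traddr emitted by gen_nvme.sh is checked against 0x0a54 (the 8086:0a54 ID the device listing earlier attributes to the NVMe disk) by reading its sysfs device file. A condensed sketch of that filter, assembled from the @1562-@1571 trace:

  target=0x0a54
  bdfs=()
  for bdf in $("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'); do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")   # here: 0x0a54
      [[ $device == "$target" ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"   # here: 0000:88:00.0

With 0000:88:00.0 selected, spdk_tgt is launched and waitforlisten parks until the target answers on /var/tmp/spdk.sock, which the startup banner below confirms.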
00:04:04.916 [2024-05-15 06:41:19.034168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid370118 ] 00:04:04.916 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.916 [2024-05-15 06:41:19.101682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.175 [2024-05-15 06:41:19.210476] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:05.175 [2024-05-15 06:41:19.210642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.741 06:41:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:05.741 06:41:19 -- common/autotest_common.sh@852 -- # return 0 00:04:05.741 06:41:19 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:05.741 06:41:19 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:05.742 06:41:19 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:09.024 nvme0n1 00:04:09.024 06:41:23 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:09.282 [2024-05-15 06:41:23.276944] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:09.282 [2024-05-15 06:41:23.276996] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:09.282 request: 00:04:09.282 { 00:04:09.282 "nvme_ctrlr_name": "nvme0", 00:04:09.282 "password": "test", 00:04:09.282 "method": "bdev_nvme_opal_revert", 00:04:09.282 "req_id": 1 00:04:09.282 } 00:04:09.282 Got JSON-RPC error response 00:04:09.282 response: 00:04:09.282 { 00:04:09.282 "code": -32603, 00:04:09.282 "message": "Internal error" 00:04:09.282 } 00:04:09.282 06:41:23 -- common/autotest_common.sh@1589 -- # true 00:04:09.282 06:41:23 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:09.282 06:41:23 -- common/autotest_common.sh@1593 -- # killprocess 370118 00:04:09.282 06:41:23 -- common/autotest_common.sh@926 -- # '[' -z 370118 ']' 00:04:09.282 06:41:23 -- common/autotest_common.sh@930 -- # kill -0 370118 00:04:09.282 06:41:23 -- common/autotest_common.sh@931 -- # uname 00:04:09.282 06:41:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:09.282 06:41:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 370118 00:04:09.282 06:41:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:09.282 06:41:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:09.282 06:41:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 370118' 00:04:09.282 killing process with pid 370118 00:04:09.282 06:41:23 -- common/autotest_common.sh@945 -- # kill 370118 00:04:09.282 06:41:23 -- common/autotest_common.sh@950 -- # wait 370118 00:04:11.182 06:41:25 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:11.182 06:41:25 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:11.182 06:41:25 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:11.182 06:41:25 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:11.182 06:41:25 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:11.182 06:41:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:11.182 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.182 06:41:25 
-- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:11.182 06:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.182 06:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.182 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.182 ************************************ 00:04:11.182 START TEST env 00:04:11.182 ************************************ 00:04:11.182 06:41:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:11.182 * Looking for test storage... 00:04:11.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:11.182 06:41:25 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:11.182 06:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.182 06:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.182 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.182 ************************************ 00:04:11.182 START TEST env_memory 00:04:11.182 ************************************ 00:04:11.182 06:41:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:11.182 00:04:11.182 00:04:11.182 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.182 http://cunit.sourceforge.net/ 00:04:11.182 00:04:11.182 00:04:11.182 Suite: memory 00:04:11.182 Test: alloc and free memory map ...[2024-05-15 06:41:25.213112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:11.182 passed 00:04:11.182 Test: mem map translation ...[2024-05-15 06:41:25.233829] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:11.182 [2024-05-15 06:41:25.233850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:11.182 [2024-05-15 06:41:25.233906] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:11.182 [2024-05-15 06:41:25.233919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:11.182 passed 00:04:11.182 Test: mem map registration ...[2024-05-15 06:41:25.277249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:11.182 [2024-05-15 06:41:25.277269] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:11.182 passed 00:04:11.182 Test: mem map adjacent registrations ...passed 00:04:11.182 00:04:11.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.182 suites 1 1 n/a 0 0 00:04:11.182 tests 4 4 4 0 0 00:04:11.182 asserts 152 152 152 0 n/a 00:04:11.182 00:04:11.182 Elapsed time = 0.145 seconds 00:04:11.182 00:04:11.182 real 0m0.152s 00:04:11.182 user 0m0.145s 00:04:11.182 sys 0m0.007s 00:04:11.182 
06:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.182 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.182 ************************************ 00:04:11.182 END TEST env_memory 00:04:11.182 ************************************ 00:04:11.182 06:41:25 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:11.182 06:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.182 06:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.182 06:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:11.182 ************************************ 00:04:11.182 START TEST env_vtophys 00:04:11.182 ************************************ 00:04:11.182 06:41:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:11.182 EAL: lib.eal log level changed from notice to debug 00:04:11.182 EAL: Detected lcore 0 as core 0 on socket 0 00:04:11.182 EAL: Detected lcore 1 as core 1 on socket 0 00:04:11.182 EAL: Detected lcore 2 as core 2 on socket 0 00:04:11.182 EAL: Detected lcore 3 as core 3 on socket 0 00:04:11.182 EAL: Detected lcore 4 as core 4 on socket 0 00:04:11.182 EAL: Detected lcore 5 as core 5 on socket 0 00:04:11.182 EAL: Detected lcore 6 as core 8 on socket 0 00:04:11.182 EAL: Detected lcore 7 as core 9 on socket 0 00:04:11.182 EAL: Detected lcore 8 as core 10 on socket 0 00:04:11.182 EAL: Detected lcore 9 as core 11 on socket 0 00:04:11.182 EAL: Detected lcore 10 as core 12 on socket 0 00:04:11.182 EAL: Detected lcore 11 as core 13 on socket 0 00:04:11.182 EAL: Detected lcore 12 as core 0 on socket 1 00:04:11.182 EAL: Detected lcore 13 as core 1 on socket 1 00:04:11.182 EAL: Detected lcore 14 as core 2 on socket 1 00:04:11.182 EAL: Detected lcore 15 as core 3 on socket 1 00:04:11.182 EAL: Detected lcore 16 as core 4 on socket 1 00:04:11.182 EAL: Detected lcore 17 as core 5 on socket 1 00:04:11.182 EAL: Detected lcore 18 as core 8 on socket 1 00:04:11.182 EAL: Detected lcore 19 as core 9 on socket 1 00:04:11.182 EAL: Detected lcore 20 as core 10 on socket 1 00:04:11.182 EAL: Detected lcore 21 as core 11 on socket 1 00:04:11.182 EAL: Detected lcore 22 as core 12 on socket 1 00:04:11.182 EAL: Detected lcore 23 as core 13 on socket 1 00:04:11.182 EAL: Detected lcore 24 as core 0 on socket 0 00:04:11.182 EAL: Detected lcore 25 as core 1 on socket 0 00:04:11.182 EAL: Detected lcore 26 as core 2 on socket 0 00:04:11.182 EAL: Detected lcore 27 as core 3 on socket 0 00:04:11.182 EAL: Detected lcore 28 as core 4 on socket 0 00:04:11.182 EAL: Detected lcore 29 as core 5 on socket 0 00:04:11.182 EAL: Detected lcore 30 as core 8 on socket 0 00:04:11.182 EAL: Detected lcore 31 as core 9 on socket 0 00:04:11.182 EAL: Detected lcore 32 as core 10 on socket 0 00:04:11.182 EAL: Detected lcore 33 as core 11 on socket 0 00:04:11.182 EAL: Detected lcore 34 as core 12 on socket 0 00:04:11.182 EAL: Detected lcore 35 as core 13 on socket 0 00:04:11.182 EAL: Detected lcore 36 as core 0 on socket 1 00:04:11.182 EAL: Detected lcore 37 as core 1 on socket 1 00:04:11.182 EAL: Detected lcore 38 as core 2 on socket 1 00:04:11.182 EAL: Detected lcore 39 as core 3 on socket 1 00:04:11.182 EAL: Detected lcore 40 as core 4 on socket 1 00:04:11.182 EAL: Detected lcore 41 as core 5 on socket 1 00:04:11.182 EAL: Detected lcore 42 as core 8 on socket 1 00:04:11.182 EAL: Detected lcore 43 as core 9 on socket 1 00:04:11.182 EAL: Detected lcore 44 as 
core 10 on socket 1 00:04:11.182 EAL: Detected lcore 45 as core 11 on socket 1 00:04:11.182 EAL: Detected lcore 46 as core 12 on socket 1 00:04:11.183 EAL: Detected lcore 47 as core 13 on socket 1 00:04:11.183 EAL: Maximum logical cores by configuration: 128 00:04:11.183 EAL: Detected CPU lcores: 48 00:04:11.183 EAL: Detected NUMA nodes: 2 00:04:11.183 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:11.183 EAL: Detected shared linkage of DPDK 00:04:11.183 EAL: No shared files mode enabled, IPC will be disabled 00:04:11.183 EAL: Bus pci wants IOVA as 'DC' 00:04:11.183 EAL: Buses did not request a specific IOVA mode. 00:04:11.183 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:11.183 EAL: Selected IOVA mode 'VA' 00:04:11.183 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.183 EAL: Probing VFIO support... 00:04:11.183 EAL: IOMMU type 1 (Type 1) is supported 00:04:11.183 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:11.183 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:11.183 EAL: VFIO support initialized 00:04:11.183 EAL: Ask a virtual area of 0x2e000 bytes 00:04:11.183 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:11.183 EAL: Setting up physically contiguous memory... 00:04:11.183 EAL: Setting maximum number of open files to 524288 00:04:11.183 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:11.183 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:11.183 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:11.183 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual 
area found at 0x201000a00000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:11.183 EAL: Ask a virtual area of 0x61000 bytes 00:04:11.183 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:11.183 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:11.183 EAL: Ask a virtual area of 0x400000000 bytes 00:04:11.183 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:11.183 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:11.183 EAL: Hugepages will be freed exactly as allocated. 00:04:11.183 EAL: No shared files mode enabled, IPC is disabled 00:04:11.183 EAL: No shared files mode enabled, IPC is disabled 00:04:11.183 EAL: TSC frequency is ~2700000 KHz 00:04:11.183 EAL: Main lcore 0 is ready (tid=7f115ee15a00;cpuset=[0]) 00:04:11.183 EAL: Trying to obtain current memory policy. 00:04:11.183 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.441 EAL: Restoring previous memory policy: 0 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was expanded by 2MB 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:11.441 EAL: Mem event callback 'spdk:(nil)' registered 00:04:11.441 00:04:11.441 00:04:11.441 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.441 http://cunit.sourceforge.net/ 00:04:11.441 00:04:11.441 00:04:11.441 Suite: components_suite 00:04:11.441 Test: vtophys_malloc_test ...passed 00:04:11.441 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:11.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.441 EAL: Restoring previous memory policy: 4 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was expanded by 4MB 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was shrunk by 4MB 00:04:11.441 EAL: Trying to obtain current memory policy. 
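The virtual-area requests above are not arbitrary: each memseg list asks for a 0x61000-byte header plus a 0x400000000-byte arena, and 0x400000000 is exactly n_segs x hugepage_sz for the parameters EAL printed (8192 segments of 2 MiB). A quick arithmetic check, no SPDK machinery involved:

  printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))   # 0x400000000 = 16 GiB per list
  echo $(( 4 * 2 * 16 ))GiB                       # 4 lists x 2 sockets = 128 GiB of VA

All of this is only reserved address space; the "Heap on socket 0 was expanded/shrunk" messages in the malloc suite that follows map and unmap hugepages inside these fixed arenas, so pointers stay stable across the policy probes below.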
00:04:11.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.441 EAL: Restoring previous memory policy: 4 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was expanded by 6MB 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was shrunk by 6MB 00:04:11.441 EAL: Trying to obtain current memory policy. 00:04:11.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.441 EAL: Restoring previous memory policy: 4 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was expanded by 10MB 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was shrunk by 10MB 00:04:11.441 EAL: Trying to obtain current memory policy. 00:04:11.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.441 EAL: Restoring previous memory policy: 4 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was expanded by 18MB 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was shrunk by 18MB 00:04:11.441 EAL: Trying to obtain current memory policy. 00:04:11.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.441 EAL: Restoring previous memory policy: 4 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was expanded by 34MB 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.441 EAL: No shared files mode enabled, IPC is disabled 00:04:11.441 EAL: Heap on socket 0 was shrunk by 34MB 00:04:11.441 EAL: Trying to obtain current memory policy. 00:04:11.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.441 EAL: Restoring previous memory policy: 4 00:04:11.441 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.441 EAL: request: mp_malloc_sync 00:04:11.442 EAL: No shared files mode enabled, IPC is disabled 00:04:11.442 EAL: Heap on socket 0 was expanded by 66MB 00:04:11.442 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.442 EAL: request: mp_malloc_sync 00:04:11.442 EAL: No shared files mode enabled, IPC is disabled 00:04:11.442 EAL: Heap on socket 0 was shrunk by 66MB 00:04:11.442 EAL: Trying to obtain current memory policy. 
00:04:11.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.442 EAL: Restoring previous memory policy: 4 00:04:11.442 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.442 EAL: request: mp_malloc_sync 00:04:11.442 EAL: No shared files mode enabled, IPC is disabled 00:04:11.442 EAL: Heap on socket 0 was expanded by 130MB 00:04:11.442 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.442 EAL: request: mp_malloc_sync 00:04:11.442 EAL: No shared files mode enabled, IPC is disabled 00:04:11.442 EAL: Heap on socket 0 was shrunk by 130MB 00:04:11.442 EAL: Trying to obtain current memory policy. 00:04:11.442 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.442 EAL: Restoring previous memory policy: 4 00:04:11.442 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.442 EAL: request: mp_malloc_sync 00:04:11.442 EAL: No shared files mode enabled, IPC is disabled 00:04:11.442 EAL: Heap on socket 0 was expanded by 258MB 00:04:11.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.700 EAL: request: mp_malloc_sync 00:04:11.700 EAL: No shared files mode enabled, IPC is disabled 00:04:11.700 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.700 EAL: Trying to obtain current memory policy. 00:04:11.700 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.700 EAL: Restoring previous memory policy: 4 00:04:11.700 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.700 EAL: request: mp_malloc_sync 00:04:11.700 EAL: No shared files mode enabled, IPC is disabled 00:04:11.700 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.958 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.958 EAL: request: mp_malloc_sync 00:04:11.958 EAL: No shared files mode enabled, IPC is disabled 00:04:11.958 EAL: Heap on socket 0 was shrunk by 514MB 00:04:11.958 EAL: Trying to obtain current memory policy. 
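The expansion sizes in this suite line up as a 2^k + 2 MB sequence: 4, 6, 10, 18, 34, 66, 130, 258, 514, and finally the 1026 MB step just below. A plausible reading, inferred from the printed sizes alone rather than from the vtophys source, is that each power-of-two test allocation costs one extra 2 MB hugepage of allocator bookkeeping:

  # reproduce the "expanded by ... MB" sizes printed by the malloc test
  for k in $(seq 1 10); do echo "$(( (1 << k) + 2 ))MB"; done

Each "Calling mem event callback 'spdk:(nil)'" pair around those messages is the hook from "Mem event callback 'spdk:(nil)' registered" earlier, firing once for the map and once for the unmap.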
00:04:11.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.216 EAL: Restoring previous memory policy: 4 00:04:12.216 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.216 EAL: request: mp_malloc_sync 00:04:12.216 EAL: No shared files mode enabled, IPC is disabled 00:04:12.216 EAL: Heap on socket 0 was expanded by 1026MB 00:04:12.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.732 EAL: request: mp_malloc_sync 00:04:12.732 EAL: No shared files mode enabled, IPC is disabled 00:04:12.732 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.732 passed 00:04:12.732 00:04:12.732 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.732 suites 1 1 n/a 0 0 00:04:12.732 tests 2 2 2 0 0 00:04:12.732 asserts 497 497 497 0 n/a 00:04:12.732 00:04:12.732 Elapsed time = 1.362 seconds 00:04:12.732 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.732 EAL: request: mp_malloc_sync 00:04:12.732 EAL: No shared files mode enabled, IPC is disabled 00:04:12.732 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.732 EAL: No shared files mode enabled, IPC is disabled 00:04:12.732 EAL: No shared files mode enabled, IPC is disabled 00:04:12.732 EAL: No shared files mode enabled, IPC is disabled 00:04:12.732 00:04:12.732 real 0m1.492s 00:04:12.732 user 0m0.866s 00:04:12.732 sys 0m0.592s 00:04:12.732 06:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.732 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.732 ************************************ 00:04:12.732 END TEST env_vtophys 00:04:12.732 ************************************ 00:04:12.732 06:41:26 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:12.732 06:41:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.732 06:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.732 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.732 ************************************ 00:04:12.732 START TEST env_pci 00:04:12.732 ************************************ 00:04:12.732 06:41:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:12.732 00:04:12.732 00:04:12.732 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.732 http://cunit.sourceforge.net/ 00:04:12.732 00:04:12.732 00:04:12.732 Suite: pci 00:04:12.732 Test: pci_hook ...[2024-05-15 06:41:26.894375] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 371147 has claimed it 00:04:12.732 EAL: Cannot find device (10000:00:01.0) 00:04:12.732 EAL: Failed to attach device on primary process 00:04:12.732 passed 00:04:12.732 00:04:12.732 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.732 suites 1 1 n/a 0 0 00:04:12.732 tests 1 1 1 0 0 00:04:12.732 asserts 25 25 25 0 n/a 00:04:12.732 00:04:12.732 Elapsed time = 0.026 seconds 00:04:12.732 00:04:12.732 real 0m0.039s 00:04:12.732 user 0m0.009s 00:04:12.732 sys 0m0.029s 00:04:12.732 06:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.732 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.732 ************************************ 00:04:12.732 END TEST env_pci 00:04:12.732 ************************************ 00:04:12.732 06:41:26 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.732 06:41:26 -- env/env.sh@15 -- # uname 00:04:12.732 06:41:26 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:04:12.732 06:41:26 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.732 06:41:26 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.732 06:41:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:12.732 06:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.732 06:41:26 -- common/autotest_common.sh@10 -- # set +x 00:04:12.732 ************************************ 00:04:12.732 START TEST env_dpdk_post_init 00:04:12.732 ************************************ 00:04:12.732 06:41:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.991 EAL: Detected CPU lcores: 48 00:04:12.991 EAL: Detected NUMA nodes: 2 00:04:12.991 EAL: Detected shared linkage of DPDK 00:04:12.991 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.991 EAL: Selected IOVA mode 'VA' 00:04:12.991 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.991 EAL: VFIO support initialized 00:04:12.991 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.991 EAL: Using IOMMU type 1 (Type 1) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:12.991 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:13.249 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:13.249 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:13.249 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:13.249 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:13.814 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:17.095 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:17.095 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:17.353 Starting DPDK initialization... 00:04:17.353 Starting SPDK post initialization... 00:04:17.353 SPDK NVMe probe 00:04:17.353 Attaching to 0000:88:00.0 00:04:17.353 Attached to 0000:88:00.0 00:04:17.353 Cleaning up... 
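The post-init pass above is a full probe cycle: EAL walks the sixteen I/OAT channels and the NVMe controller through vfio-pci, attaches to 0000:88:00.0, then releases the BAR mapping at 0x202001040000 during cleanup. Two of its flags do real work: -c 0x1 pins the run to a single core, and --base-virtaddr=0x200000000000 asks EAL to start its mappings at a fixed address so primary and secondary processes can agree on a layout (the spdk_tgt EAL parameters earlier in this log use the same value). Rerunning the binary by hand follows the same shape:

  "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" \
      -c 0x1 --base-virtaddr=0x200000000000

Of the ~4.4 s of real time reported just below, roughly three seconds sit between the NVMe probe and the resource release, i.e. in controller attach.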
00:04:17.353 00:04:17.353 real 0m4.407s 00:04:17.353 user 0m3.259s 00:04:17.353 sys 0m0.202s 00:04:17.353 06:41:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.353 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.353 ************************************ 00:04:17.353 END TEST env_dpdk_post_init 00:04:17.353 ************************************ 00:04:17.353 06:41:31 -- env/env.sh@26 -- # uname 00:04:17.353 06:41:31 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:17.353 06:41:31 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.353 06:41:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:17.353 06:41:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.353 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.353 ************************************ 00:04:17.353 START TEST env_mem_callbacks 00:04:17.353 ************************************ 00:04:17.353 06:41:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:17.353 EAL: Detected CPU lcores: 48 00:04:17.353 EAL: Detected NUMA nodes: 2 00:04:17.353 EAL: Detected shared linkage of DPDK 00:04:17.353 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:17.353 EAL: Selected IOVA mode 'VA' 00:04:17.353 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.353 EAL: VFIO support initialized 00:04:17.353 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:17.353 00:04:17.353 00:04:17.353 CUnit - A unit testing framework for C - Version 2.1-3 00:04:17.353 http://cunit.sourceforge.net/ 00:04:17.353 00:04:17.353 00:04:17.353 Suite: memory 00:04:17.354 Test: test ... 
00:04:17.354 register 0x200000200000 2097152 00:04:17.354 malloc 3145728 00:04:17.354 register 0x200000400000 4194304 00:04:17.354 buf 0x200000500000 len 3145728 PASSED 00:04:17.354 malloc 64 00:04:17.354 buf 0x2000004fff40 len 64 PASSED 00:04:17.354 malloc 4194304 00:04:17.354 register 0x200000800000 6291456 00:04:17.354 buf 0x200000a00000 len 4194304 PASSED 00:04:17.354 free 0x200000500000 3145728 00:04:17.354 free 0x2000004fff40 64 00:04:17.354 unregister 0x200000400000 4194304 PASSED 00:04:17.354 free 0x200000a00000 4194304 00:04:17.354 unregister 0x200000800000 6291456 PASSED 00:04:17.354 malloc 8388608 00:04:17.354 register 0x200000400000 10485760 00:04:17.354 buf 0x200000600000 len 8388608 PASSED 00:04:17.354 free 0x200000600000 8388608 00:04:17.354 unregister 0x200000400000 10485760 PASSED 00:04:17.354 passed 00:04:17.354 00:04:17.354 Run Summary: Type Total Ran Passed Failed Inactive 00:04:17.354 suites 1 1 n/a 0 0 00:04:17.354 tests 1 1 1 0 0 00:04:17.354 asserts 15 15 15 0 n/a 00:04:17.354 00:04:17.354 Elapsed time = 0.005 seconds 00:04:17.354 00:04:17.354 real 0m0.054s 00:04:17.354 user 0m0.017s 00:04:17.354 sys 0m0.037s 00:04:17.354 06:41:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.354 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.354 ************************************ 00:04:17.354 END TEST env_mem_callbacks 00:04:17.354 ************************************ 00:04:17.354 00:04:17.354 real 0m6.330s 00:04:17.354 user 0m4.371s 00:04:17.354 sys 0m1.004s 00:04:17.354 06:41:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.354 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.354 ************************************ 00:04:17.354 END TEST env 00:04:17.354 ************************************ 00:04:17.354 06:41:31 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:17.354 06:41:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:17.354 06:41:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.354 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.354 ************************************ 00:04:17.354 START TEST rpc 00:04:17.354 ************************************ 00:04:17.354 06:41:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:17.354 * Looking for test storage... 00:04:17.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:17.354 06:41:31 -- rpc/rpc.sh@65 -- # spdk_pid=371808 00:04:17.354 06:41:31 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:17.354 06:41:31 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.354 06:41:31 -- rpc/rpc.sh@67 -- # waitforlisten 371808 00:04:17.354 06:41:31 -- common/autotest_common.sh@819 -- # '[' -z 371808 ']' 00:04:17.354 06:41:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.354 06:41:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:17.354 06:41:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
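rpc.sh launches spdk_tgt with -e bdev (a tracepoint group mask, as the "Tracepoint Group Mask bdev specified" notice below confirms) and then parks in waitforlisten until the target's RPC socket accepts requests. A simplified sketch of that wait, assuming the polling shape of the helper in autotest_common.sh (the real one also enforces max_retries):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      while kill -0 "$pid" 2>/dev/null; do
          # rpc_get_methods succeeds as soon as the target is listening
          "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1   # target exited before it ever listened
  }

The trap registered just above ('killprocess $spdk_pid; exit 1' on SIGINT/SIGTERM/EXIT) guarantees the target is reaped even if one of the rpc sub-tests aborts.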
00:04:17.354 06:41:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:17.354 06:41:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.354 [2024-05-15 06:41:31.583302] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:17.354 [2024-05-15 06:41:31.583394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid371808 ] 00:04:17.612 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.612 [2024-05-15 06:41:31.651839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.612 [2024-05-15 06:41:31.754896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:17.612 [2024-05-15 06:41:31.755067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:17.612 [2024-05-15 06:41:31.755084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 371808' to capture a snapshot of events at runtime. 00:04:17.612 [2024-05-15 06:41:31.755101] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid371808 for offline analysis/debug. 00:04:17.612 [2024-05-15 06:41:31.755129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.546 06:41:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:18.546 06:41:32 -- common/autotest_common.sh@852 -- # return 0 00:04:18.546 06:41:32 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:18.546 06:41:32 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:18.546 06:41:32 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:18.546 06:41:32 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:18.546 06:41:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.546 06:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.546 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.546 ************************************ 00:04:18.546 START TEST rpc_integrity 00:04:18.546 ************************************ 00:04:18.546 06:41:32 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:18.546 06:41:32 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.546 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.546 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.546 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.546 06:41:32 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.546 06:41:32 -- rpc/rpc.sh@13 -- # jq length 00:04:18.546 06:41:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.546 06:41:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.546 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.546 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.546 06:41:32 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:04:18.546 06:41:32 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:18.546 06:41:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.546 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.546 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.546 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.546 06:41:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.546 { 00:04:18.546 "name": "Malloc0", 00:04:18.546 "aliases": [ 00:04:18.546 "cadad54a-155f-48e1-a5dc-e2d8e0c5a82b" 00:04:18.546 ], 00:04:18.546 "product_name": "Malloc disk", 00:04:18.546 "block_size": 512, 00:04:18.546 "num_blocks": 16384, 00:04:18.546 "uuid": "cadad54a-155f-48e1-a5dc-e2d8e0c5a82b", 00:04:18.546 "assigned_rate_limits": { 00:04:18.546 "rw_ios_per_sec": 0, 00:04:18.546 "rw_mbytes_per_sec": 0, 00:04:18.546 "r_mbytes_per_sec": 0, 00:04:18.546 "w_mbytes_per_sec": 0 00:04:18.546 }, 00:04:18.546 "claimed": false, 00:04:18.546 "zoned": false, 00:04:18.546 "supported_io_types": { 00:04:18.546 "read": true, 00:04:18.546 "write": true, 00:04:18.546 "unmap": true, 00:04:18.546 "write_zeroes": true, 00:04:18.546 "flush": true, 00:04:18.546 "reset": true, 00:04:18.546 "compare": false, 00:04:18.546 "compare_and_write": false, 00:04:18.546 "abort": true, 00:04:18.546 "nvme_admin": false, 00:04:18.546 "nvme_io": false 00:04:18.547 }, 00:04:18.547 "memory_domains": [ 00:04:18.547 { 00:04:18.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.547 "dma_device_type": 2 00:04:18.547 } 00:04:18.547 ], 00:04:18.547 "driver_specific": {} 00:04:18.547 } 00:04:18.547 ]' 00:04:18.547 06:41:32 -- rpc/rpc.sh@17 -- # jq length 00:04:18.547 06:41:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.547 06:41:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:18.547 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.547 [2024-05-15 06:41:32.624997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:18.547 [2024-05-15 06:41:32.625039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.547 [2024-05-15 06:41:32.625060] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21a2f60 00:04:18.547 [2024-05-15 06:41:32.625074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.547 [2024-05-15 06:41:32.626591] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.547 [2024-05-15 06:41:32.626619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.547 Passthru0 00:04:18.547 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.547 06:41:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.547 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.547 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.547 06:41:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.547 { 00:04:18.547 "name": "Malloc0", 00:04:18.547 "aliases": [ 00:04:18.547 "cadad54a-155f-48e1-a5dc-e2d8e0c5a82b" 00:04:18.547 ], 00:04:18.547 "product_name": "Malloc disk", 00:04:18.547 "block_size": 512, 00:04:18.547 "num_blocks": 16384, 00:04:18.547 "uuid": "cadad54a-155f-48e1-a5dc-e2d8e0c5a82b", 00:04:18.547 "assigned_rate_limits": { 00:04:18.547 "rw_ios_per_sec": 0, 00:04:18.547 "rw_mbytes_per_sec": 0, 00:04:18.547 
"r_mbytes_per_sec": 0, 00:04:18.547 "w_mbytes_per_sec": 0 00:04:18.547 }, 00:04:18.547 "claimed": true, 00:04:18.547 "claim_type": "exclusive_write", 00:04:18.547 "zoned": false, 00:04:18.547 "supported_io_types": { 00:04:18.547 "read": true, 00:04:18.547 "write": true, 00:04:18.547 "unmap": true, 00:04:18.547 "write_zeroes": true, 00:04:18.547 "flush": true, 00:04:18.547 "reset": true, 00:04:18.547 "compare": false, 00:04:18.547 "compare_and_write": false, 00:04:18.547 "abort": true, 00:04:18.547 "nvme_admin": false, 00:04:18.547 "nvme_io": false 00:04:18.547 }, 00:04:18.547 "memory_domains": [ 00:04:18.547 { 00:04:18.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.547 "dma_device_type": 2 00:04:18.547 } 00:04:18.547 ], 00:04:18.547 "driver_specific": {} 00:04:18.547 }, 00:04:18.547 { 00:04:18.547 "name": "Passthru0", 00:04:18.547 "aliases": [ 00:04:18.547 "95da076b-8c56-5faf-9fd2-d8cb1c47ca3f" 00:04:18.547 ], 00:04:18.547 "product_name": "passthru", 00:04:18.547 "block_size": 512, 00:04:18.547 "num_blocks": 16384, 00:04:18.547 "uuid": "95da076b-8c56-5faf-9fd2-d8cb1c47ca3f", 00:04:18.547 "assigned_rate_limits": { 00:04:18.547 "rw_ios_per_sec": 0, 00:04:18.547 "rw_mbytes_per_sec": 0, 00:04:18.547 "r_mbytes_per_sec": 0, 00:04:18.547 "w_mbytes_per_sec": 0 00:04:18.547 }, 00:04:18.547 "claimed": false, 00:04:18.547 "zoned": false, 00:04:18.547 "supported_io_types": { 00:04:18.547 "read": true, 00:04:18.547 "write": true, 00:04:18.547 "unmap": true, 00:04:18.547 "write_zeroes": true, 00:04:18.547 "flush": true, 00:04:18.547 "reset": true, 00:04:18.547 "compare": false, 00:04:18.547 "compare_and_write": false, 00:04:18.547 "abort": true, 00:04:18.547 "nvme_admin": false, 00:04:18.547 "nvme_io": false 00:04:18.547 }, 00:04:18.547 "memory_domains": [ 00:04:18.547 { 00:04:18.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.547 "dma_device_type": 2 00:04:18.547 } 00:04:18.547 ], 00:04:18.547 "driver_specific": { 00:04:18.547 "passthru": { 00:04:18.547 "name": "Passthru0", 00:04:18.547 "base_bdev_name": "Malloc0" 00:04:18.547 } 00:04:18.547 } 00:04:18.547 } 00:04:18.547 ]' 00:04:18.547 06:41:32 -- rpc/rpc.sh@21 -- # jq length 00:04:18.547 06:41:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.547 06:41:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.547 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.547 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.547 06:41:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:18.547 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.547 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.547 06:41:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.547 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.547 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.547 06:41:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.547 06:41:32 -- rpc/rpc.sh@26 -- # jq length 00:04:18.547 06:41:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.547 00:04:18.547 real 0m0.229s 00:04:18.547 user 0m0.144s 00:04:18.547 sys 0m0.023s 00:04:18.547 06:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.547 ************************************ 
00:04:18.547 END TEST rpc_integrity 00:04:18.547 ************************************ 00:04:18.547 06:41:32 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:18.547 06:41:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.547 06:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.547 ************************************ 00:04:18.547 START TEST rpc_plugins 00:04:18.547 ************************************ 00:04:18.547 06:41:32 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:18.547 06:41:32 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:18.547 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.547 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.805 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.805 06:41:32 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:18.805 06:41:32 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:18.805 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.805 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.805 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.805 06:41:32 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:18.805 { 00:04:18.805 "name": "Malloc1", 00:04:18.805 "aliases": [ 00:04:18.805 "ddddbfbe-8ccc-4430-a9b2-5c9eed73e0ba" 00:04:18.805 ], 00:04:18.805 "product_name": "Malloc disk", 00:04:18.805 "block_size": 4096, 00:04:18.805 "num_blocks": 256, 00:04:18.805 "uuid": "ddddbfbe-8ccc-4430-a9b2-5c9eed73e0ba", 00:04:18.805 "assigned_rate_limits": { 00:04:18.805 "rw_ios_per_sec": 0, 00:04:18.805 "rw_mbytes_per_sec": 0, 00:04:18.805 "r_mbytes_per_sec": 0, 00:04:18.805 "w_mbytes_per_sec": 0 00:04:18.805 }, 00:04:18.805 "claimed": false, 00:04:18.805 "zoned": false, 00:04:18.805 "supported_io_types": { 00:04:18.805 "read": true, 00:04:18.805 "write": true, 00:04:18.805 "unmap": true, 00:04:18.805 "write_zeroes": true, 00:04:18.805 "flush": true, 00:04:18.805 "reset": true, 00:04:18.805 "compare": false, 00:04:18.805 "compare_and_write": false, 00:04:18.805 "abort": true, 00:04:18.805 "nvme_admin": false, 00:04:18.805 "nvme_io": false 00:04:18.805 }, 00:04:18.805 "memory_domains": [ 00:04:18.805 { 00:04:18.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.805 "dma_device_type": 2 00:04:18.805 } 00:04:18.805 ], 00:04:18.805 "driver_specific": {} 00:04:18.805 } 00:04:18.805 ]' 00:04:18.805 06:41:32 -- rpc/rpc.sh@32 -- # jq length 00:04:18.805 06:41:32 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:18.805 06:41:32 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:18.805 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.805 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.805 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.805 06:41:32 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:18.805 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.805 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.805 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.805 06:41:32 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:18.805 06:41:32 -- rpc/rpc.sh@36 -- # jq length 00:04:18.805 06:41:32 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:18.805 00:04:18.805 real 0m0.114s 00:04:18.805 user 0m0.076s 00:04:18.805 sys 0m0.007s 00:04:18.805 06:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.805 06:41:32 -- 
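[Editor's sketch] The rpc_integrity test above drives spdk_tgt purely over its RPC socket: create an 8 MiB malloc bdev with 512-byte blocks (hence num_blocks 16384 in the dump), layer a passthru bdev on top so the base reports "claimed": true with claim_type exclusive_write, confirm both via bdev_get_bdevs, then delete the claimer before the base. A minimal sketch of the same sequence run by hand, assuming a target is already up (add -s <sock> if it was started with -r):
# sketch: replay the rpc_integrity sequence against a running spdk_tgt
rpc='scripts/rpc.py'                           # append -s <sock> for a non-default socket
malloc=$($rpc bdev_malloc_create 8 512)        # prints the new bdev name, e.g. Malloc0
$rpc bdev_passthru_create -b "$malloc" -p Passthru0
$rpc bdev_get_bdevs | jq length                # 2: base malloc + passthru
$rpc bdev_passthru_delete Passthru0            # drop the claimer first
$rpc bdev_malloc_delete "$malloc"
$rpc bdev_get_bdevs | jq length                # back to 0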
common/autotest_common.sh@10 -- # set +x 00:04:18.805 ************************************ 00:04:18.805 END TEST rpc_plugins 00:04:18.805 ************************************ 00:04:18.805 06:41:32 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:18.805 06:41:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.805 06:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.805 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.805 ************************************ 00:04:18.805 START TEST rpc_trace_cmd_test 00:04:18.805 ************************************ 00:04:18.805 06:41:32 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:18.805 06:41:32 -- rpc/rpc.sh@40 -- # local info 00:04:18.805 06:41:32 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:18.805 06:41:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:18.805 06:41:32 -- common/autotest_common.sh@10 -- # set +x 00:04:18.805 06:41:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:18.805 06:41:32 -- rpc/rpc.sh@42 -- # info='{ 00:04:18.805 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid371808", 00:04:18.805 "tpoint_group_mask": "0x8", 00:04:18.805 "iscsi_conn": { 00:04:18.805 "mask": "0x2", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "scsi": { 00:04:18.805 "mask": "0x4", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "bdev": { 00:04:18.805 "mask": "0x8", 00:04:18.805 "tpoint_mask": "0xffffffffffffffff" 00:04:18.805 }, 00:04:18.805 "nvmf_rdma": { 00:04:18.805 "mask": "0x10", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "nvmf_tcp": { 00:04:18.805 "mask": "0x20", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "ftl": { 00:04:18.805 "mask": "0x40", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "blobfs": { 00:04:18.805 "mask": "0x80", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "dsa": { 00:04:18.805 "mask": "0x200", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "thread": { 00:04:18.805 "mask": "0x400", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "nvme_pcie": { 00:04:18.805 "mask": "0x800", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "iaa": { 00:04:18.805 "mask": "0x1000", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "nvme_tcp": { 00:04:18.805 "mask": "0x2000", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 }, 00:04:18.805 "bdev_nvme": { 00:04:18.805 "mask": "0x4000", 00:04:18.805 "tpoint_mask": "0x0" 00:04:18.805 } 00:04:18.805 }' 00:04:18.805 06:41:32 -- rpc/rpc.sh@43 -- # jq length 00:04:18.805 06:41:32 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:18.805 06:41:32 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:18.805 06:41:32 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:18.805 06:41:32 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:18.805 06:41:33 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:18.805 06:41:33 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:19.064 06:41:33 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:19.064 06:41:33 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:19.064 06:41:33 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:19.064 00:04:19.064 real 0m0.190s 00:04:19.064 user 0m0.169s 00:04:19.064 sys 0m0.015s 00:04:19.064 06:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 ************************************ 
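[Editor's sketch] rpc_trace_cmd_test reads the tracing state back out: the target was launched with the bdev tracepoint group enabled, so trace_get_info reports tpoint_group_mask 0x8, a full 0xffffffffffffffff tpoint_mask for bdev and 0x0 for every other group, plus the shm path backing the trace buffer. A sketch of inspecting those fields and capturing a snapshot, using the pid and paths the target printed at startup:
# sketch: query tracepoint state, then snapshot it for offline analysis
scripts/rpc.py trace_get_info | jq -r '.tpoint_shm_path, .tpoint_group_mask, .bdev.tpoint_mask'
spdk_trace -s spdk_tgt -p 371808               # capture command the startup notice suggested
cp /dev/shm/spdk_tgt_trace.pid371808 /tmp/     # or just copy the raw shm buffer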
00:04:19.064 END TEST rpc_trace_cmd_test 00:04:19.064 ************************************ 00:04:19.064 06:41:33 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:19.064 06:41:33 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:19.064 06:41:33 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:19.064 06:41:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.064 06:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 ************************************ 00:04:19.064 START TEST rpc_daemon_integrity 00:04:19.064 ************************************ 00:04:19.064 06:41:33 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:19.064 06:41:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.064 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.064 06:41:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.064 06:41:33 -- rpc/rpc.sh@13 -- # jq length 00:04:19.064 06:41:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.064 06:41:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.064 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.064 06:41:33 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:19.064 06:41:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.064 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.064 06:41:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.064 { 00:04:19.064 "name": "Malloc2", 00:04:19.064 "aliases": [ 00:04:19.064 "83c8d826-0ee6-4348-b2c0-02fac0d42519" 00:04:19.064 ], 00:04:19.064 "product_name": "Malloc disk", 00:04:19.064 "block_size": 512, 00:04:19.064 "num_blocks": 16384, 00:04:19.064 "uuid": "83c8d826-0ee6-4348-b2c0-02fac0d42519", 00:04:19.064 "assigned_rate_limits": { 00:04:19.064 "rw_ios_per_sec": 0, 00:04:19.064 "rw_mbytes_per_sec": 0, 00:04:19.064 "r_mbytes_per_sec": 0, 00:04:19.064 "w_mbytes_per_sec": 0 00:04:19.064 }, 00:04:19.064 "claimed": false, 00:04:19.064 "zoned": false, 00:04:19.064 "supported_io_types": { 00:04:19.064 "read": true, 00:04:19.064 "write": true, 00:04:19.064 "unmap": true, 00:04:19.064 "write_zeroes": true, 00:04:19.064 "flush": true, 00:04:19.064 "reset": true, 00:04:19.064 "compare": false, 00:04:19.064 "compare_and_write": false, 00:04:19.064 "abort": true, 00:04:19.064 "nvme_admin": false, 00:04:19.064 "nvme_io": false 00:04:19.064 }, 00:04:19.064 "memory_domains": [ 00:04:19.064 { 00:04:19.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.064 "dma_device_type": 2 00:04:19.064 } 00:04:19.064 ], 00:04:19.064 "driver_specific": {} 00:04:19.064 } 00:04:19.064 ]' 00:04:19.064 06:41:33 -- rpc/rpc.sh@17 -- # jq length 00:04:19.064 06:41:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.064 06:41:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:19.064 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 [2024-05-15 06:41:33.234734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:19.064 [2024-05-15 
06:41:33.234780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.064 [2024-05-15 06:41:33.234807] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21a3990 00:04:19.064 [2024-05-15 06:41:33.234824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.064 [2024-05-15 06:41:33.236171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.064 [2024-05-15 06:41:33.236196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.064 Passthru0 00:04:19.064 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.064 06:41:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.064 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.064 06:41:33 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.064 { 00:04:19.064 "name": "Malloc2", 00:04:19.064 "aliases": [ 00:04:19.064 "83c8d826-0ee6-4348-b2c0-02fac0d42519" 00:04:19.064 ], 00:04:19.064 "product_name": "Malloc disk", 00:04:19.064 "block_size": 512, 00:04:19.064 "num_blocks": 16384, 00:04:19.064 "uuid": "83c8d826-0ee6-4348-b2c0-02fac0d42519", 00:04:19.064 "assigned_rate_limits": { 00:04:19.064 "rw_ios_per_sec": 0, 00:04:19.064 "rw_mbytes_per_sec": 0, 00:04:19.064 "r_mbytes_per_sec": 0, 00:04:19.064 "w_mbytes_per_sec": 0 00:04:19.064 }, 00:04:19.064 "claimed": true, 00:04:19.064 "claim_type": "exclusive_write", 00:04:19.064 "zoned": false, 00:04:19.064 "supported_io_types": { 00:04:19.064 "read": true, 00:04:19.064 "write": true, 00:04:19.064 "unmap": true, 00:04:19.064 "write_zeroes": true, 00:04:19.064 "flush": true, 00:04:19.064 "reset": true, 00:04:19.064 "compare": false, 00:04:19.064 "compare_and_write": false, 00:04:19.064 "abort": true, 00:04:19.064 "nvme_admin": false, 00:04:19.064 "nvme_io": false 00:04:19.064 }, 00:04:19.064 "memory_domains": [ 00:04:19.064 { 00:04:19.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.064 "dma_device_type": 2 00:04:19.064 } 00:04:19.064 ], 00:04:19.064 "driver_specific": {} 00:04:19.064 }, 00:04:19.064 { 00:04:19.064 "name": "Passthru0", 00:04:19.064 "aliases": [ 00:04:19.064 "5e431314-5ec1-5f05-b3b4-fa9a23457544" 00:04:19.064 ], 00:04:19.064 "product_name": "passthru", 00:04:19.064 "block_size": 512, 00:04:19.064 "num_blocks": 16384, 00:04:19.064 "uuid": "5e431314-5ec1-5f05-b3b4-fa9a23457544", 00:04:19.064 "assigned_rate_limits": { 00:04:19.064 "rw_ios_per_sec": 0, 00:04:19.064 "rw_mbytes_per_sec": 0, 00:04:19.064 "r_mbytes_per_sec": 0, 00:04:19.064 "w_mbytes_per_sec": 0 00:04:19.064 }, 00:04:19.064 "claimed": false, 00:04:19.064 "zoned": false, 00:04:19.064 "supported_io_types": { 00:04:19.064 "read": true, 00:04:19.064 "write": true, 00:04:19.064 "unmap": true, 00:04:19.064 "write_zeroes": true, 00:04:19.064 "flush": true, 00:04:19.064 "reset": true, 00:04:19.064 "compare": false, 00:04:19.064 "compare_and_write": false, 00:04:19.064 "abort": true, 00:04:19.064 "nvme_admin": false, 00:04:19.064 "nvme_io": false 00:04:19.064 }, 00:04:19.064 "memory_domains": [ 00:04:19.064 { 00:04:19.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.064 "dma_device_type": 2 00:04:19.064 } 00:04:19.064 ], 00:04:19.064 "driver_specific": { 00:04:19.064 "passthru": { 00:04:19.064 "name": "Passthru0", 00:04:19.064 "base_bdev_name": "Malloc2" 00:04:19.064 } 00:04:19.064 } 00:04:19.064 } 
00:04:19.064 ]' 00:04:19.064 06:41:33 -- rpc/rpc.sh@21 -- # jq length 00:04:19.064 06:41:33 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.064 06:41:33 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.064 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.064 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.322 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.323 06:41:33 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:19.323 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.323 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.323 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.323 06:41:33 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.323 06:41:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:19.323 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.323 06:41:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:19.323 06:41:33 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.323 06:41:33 -- rpc/rpc.sh@26 -- # jq length 00:04:19.323 06:41:33 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.323 00:04:19.323 real 0m0.231s 00:04:19.323 user 0m0.151s 00:04:19.323 sys 0m0.020s 00:04:19.323 06:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.323 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.323 ************************************ 00:04:19.323 END TEST rpc_daemon_integrity 00:04:19.323 ************************************ 00:04:19.323 06:41:33 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:19.323 06:41:33 -- rpc/rpc.sh@84 -- # killprocess 371808 00:04:19.323 06:41:33 -- common/autotest_common.sh@926 -- # '[' -z 371808 ']' 00:04:19.323 06:41:33 -- common/autotest_common.sh@930 -- # kill -0 371808 00:04:19.323 06:41:33 -- common/autotest_common.sh@931 -- # uname 00:04:19.323 06:41:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:19.323 06:41:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 371808 00:04:19.323 06:41:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:19.323 06:41:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:19.323 06:41:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 371808' 00:04:19.323 killing process with pid 371808 00:04:19.323 06:41:33 -- common/autotest_common.sh@945 -- # kill 371808 00:04:19.323 06:41:33 -- common/autotest_common.sh@950 -- # wait 371808 00:04:19.915 00:04:19.915 real 0m2.372s 00:04:19.915 user 0m3.010s 00:04:19.915 sys 0m0.551s 00:04:19.915 06:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.915 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.915 ************************************ 00:04:19.915 END TEST rpc 00:04:19.915 ************************************ 00:04:19.915 06:41:33 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:19.915 06:41:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.915 06:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.915 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.915 ************************************ 00:04:19.915 START TEST rpc_client 00:04:19.915 ************************************ 00:04:19.915 06:41:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:19.915 * 
Looking for test storage... 00:04:19.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:19.915 06:41:33 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:19.915 OK 00:04:19.915 06:41:33 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:19.915 00:04:19.915 real 0m0.068s 00:04:19.915 user 0m0.024s 00:04:19.915 sys 0m0.050s 00:04:19.915 06:41:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.915 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.915 ************************************ 00:04:19.915 END TEST rpc_client 00:04:19.915 ************************************ 00:04:19.915 06:41:33 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:19.915 06:41:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.915 06:41:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.915 06:41:33 -- common/autotest_common.sh@10 -- # set +x 00:04:19.915 ************************************ 00:04:19.915 START TEST json_config 00:04:19.915 ************************************ 00:04:19.915 06:41:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:19.915 06:41:34 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:19.915 06:41:34 -- nvmf/common.sh@7 -- # uname -s 00:04:19.915 06:41:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.915 06:41:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.915 06:41:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.915 06:41:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:19.915 06:41:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.915 06:41:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.915 06:41:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.915 06:41:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.915 06:41:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.915 06:41:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.915 06:41:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:19.915 06:41:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:19.915 06:41:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.915 06:41:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.915 06:41:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:19.915 06:41:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:19.915 06:41:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.915 06:41:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.916 06:41:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.916 06:41:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.916 06:41:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.916 06:41:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.916 06:41:34 -- paths/export.sh@5 -- # export PATH 00:04:19.916 06:41:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.916 06:41:34 -- nvmf/common.sh@46 -- # : 0 00:04:19.916 06:41:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:19.916 06:41:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:19.916 06:41:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:19.916 06:41:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.916 06:41:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.916 06:41:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:19.916 06:41:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:19.916 06:41:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:19.916 06:41:34 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:19.916 06:41:34 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:19.916 06:41:34 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:19.916 06:41:34 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:19.916 06:41:34 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:19.916 06:41:34 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:19.916 06:41:34 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:19.916 06:41:34 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:19.916 06:41:34 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:19.916 06:41:34 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:19.916 06:41:34 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:19.916 06:41:34 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:19.916 06:41:34 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:19.916 06:41:34 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:19.916 06:41:34 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:19.916 INFO: JSON configuration test init 00:04:19.916 06:41:34 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:19.916 06:41:34 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:19.916 06:41:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:19.916 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.916 06:41:34 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:19.916 06:41:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:19.916 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.916 06:41:34 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:19.916 06:41:34 -- json_config/json_config.sh@98 -- # local app=target 00:04:19.916 06:41:34 -- json_config/json_config.sh@99 -- # shift 00:04:19.916 06:41:34 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:19.916 06:41:34 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:19.916 06:41:34 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:19.916 06:41:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.916 06:41:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:19.916 06:41:34 -- json_config/json_config.sh@111 -- # app_pid[$app]=372288 00:04:19.916 06:41:34 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:19.916 06:41:34 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:19.916 Waiting for target to run... 00:04:19.916 06:41:34 -- json_config/json_config.sh@114 -- # waitforlisten 372288 /var/tmp/spdk_tgt.sock 00:04:19.916 06:41:34 -- common/autotest_common.sh@819 -- # '[' -z 372288 ']' 00:04:19.916 06:41:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:19.916 06:41:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:19.916 06:41:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:19.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:19.916 06:41:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:19.916 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:04:19.916 [2024-05-15 06:41:34.071900] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:19.916 [2024-05-15 06:41:34.072012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid372288 ] 00:04:19.916 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.482 [2024-05-15 06:41:34.599377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.482 [2024-05-15 06:41:34.700675] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:20.482 [2024-05-15 06:41:34.700863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.047 06:41:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:21.047 06:41:34 -- common/autotest_common.sh@852 -- # return 0 00:04:21.047 06:41:34 -- json_config/json_config.sh@115 -- # echo '' 00:04:21.047 00:04:21.047 06:41:34 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:21.047 06:41:34 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:21.047 06:41:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.047 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:04:21.047 06:41:34 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:21.047 06:41:34 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:21.047 06:41:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:21.047 06:41:34 -- common/autotest_common.sh@10 -- # set +x 00:04:21.047 06:41:35 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:21.047 06:41:35 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:21.047 06:41:35 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:24.330 06:41:38 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:24.330 06:41:38 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:24.330 06:41:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:24.330 06:41:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.330 06:41:38 -- json_config/json_config.sh@48 -- # local ret=0 00:04:24.330 06:41:38 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:24.330 06:41:38 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:24.330 06:41:38 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:24.330 06:41:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:24.330 06:41:38 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:24.330 06:41:38 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:24.330 06:41:38 -- json_config/json_config.sh@51 -- # local get_types 00:04:24.330 06:41:38 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:24.330 06:41:38 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:24.330 06:41:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:24.330 06:41:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.330 06:41:38 -- json_config/json_config.sh@58 -- # return 0 00:04:24.330 06:41:38 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:24.330 06:41:38 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:24.330 06:41:38 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:24.330 06:41:38 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:24.330 06:41:38 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:24.330 06:41:38 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:24.330 06:41:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:24.330 06:41:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.330 06:41:38 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:24.330 06:41:38 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:24.330 06:41:38 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:24.330 06:41:38 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:24.330 06:41:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:24.588 MallocForNvmf0 00:04:24.588 06:41:38 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:24.588 06:41:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:24.845 MallocForNvmf1 00:04:24.845 06:41:38 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:24.845 06:41:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:25.103 [2024-05-15 06:41:39.130281] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.103 06:41:39 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:25.103 06:41:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:25.361 06:41:39 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:25.361 06:41:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:25.619 06:41:39 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:25.619 06:41:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:25.619 06:41:39 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:25.619 06:41:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:25.877 [2024-05-15 06:41:40.073531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
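[Editor's sketch] Before any save/compare can happen, json_config_test_init provisions the nvmf state it will serialize: two malloc bdevs, a TCP transport (the "TCP Transport Init" notice), subsystem nqn.2016-06.io.spdk:cnode1 carrying both bdevs as namespaces, and a listener on 127.0.0.1:4420 (the "Target Listening" notice). The same sequence as plain rpc.py calls, using exactly the names and addresses above:
# sketch: build the nvmf-over-TCP config that save_config will capture
rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420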
00:04:25.877 06:41:40 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:25.877 06:41:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:25.877 06:41:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.135 06:41:40 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:26.135 06:41:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:26.136 06:41:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.136 06:41:40 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:26.136 06:41:40 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:26.136 06:41:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:26.394 MallocBdevForConfigChangeCheck 00:04:26.394 06:41:40 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:26.394 06:41:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:26.394 06:41:40 -- common/autotest_common.sh@10 -- # set +x 00:04:26.394 06:41:40 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:26.394 06:41:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.652 06:41:40 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:26.652 INFO: shutting down applications... 00:04:26.652 06:41:40 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:26.652 06:41:40 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:26.652 06:41:40 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:26.652 06:41:40 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:28.552 Calling clear_iscsi_subsystem 00:04:28.552 Calling clear_nvmf_subsystem 00:04:28.552 Calling clear_nbd_subsystem 00:04:28.552 Calling clear_ublk_subsystem 00:04:28.552 Calling clear_vhost_blk_subsystem 00:04:28.552 Calling clear_vhost_scsi_subsystem 00:04:28.552 Calling clear_scheduler_subsystem 00:04:28.552 Calling clear_bdev_subsystem 00:04:28.552 Calling clear_accel_subsystem 00:04:28.552 Calling clear_vmd_subsystem 00:04:28.552 Calling clear_sock_subsystem 00:04:28.552 Calling clear_iobuf_subsystem 00:04:28.552 06:41:42 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:28.552 06:41:42 -- json_config/json_config.sh@396 -- # count=100 00:04:28.552 06:41:42 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:28.552 06:41:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.552 06:41:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:28.552 06:41:42 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:28.810 06:41:42 -- json_config/json_config.sh@398 -- # break 00:04:28.810 06:41:42 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:28.810 06:41:42 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:28.810 06:41:42 -- json_config/json_config.sh@120 -- # local app=target 00:04:28.810 06:41:42 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:28.810 06:41:42 -- json_config/json_config.sh@124 -- # [[ -n 372288 ]] 00:04:28.810 06:41:42 -- json_config/json_config.sh@127 -- # kill -SIGINT 372288 00:04:28.810 06:41:42 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:28.810 06:41:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:28.810 06:41:42 -- json_config/json_config.sh@130 -- # kill -0 372288 00:04:28.810 06:41:42 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:29.390 06:41:43 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:29.390 06:41:43 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:29.390 06:41:43 -- json_config/json_config.sh@130 -- # kill -0 372288 00:04:29.390 06:41:43 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:29.390 06:41:43 -- json_config/json_config.sh@132 -- # break 00:04:29.390 06:41:43 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:29.390 06:41:43 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:29.390 SPDK target shutdown done 00:04:29.390 06:41:43 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:29.390 INFO: relaunching applications... 00:04:29.390 06:41:43 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.390 06:41:43 -- json_config/json_config.sh@98 -- # local app=target 00:04:29.390 06:41:43 -- json_config/json_config.sh@99 -- # shift 00:04:29.390 06:41:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:29.390 06:41:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:29.390 06:41:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:29.390 06:41:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:29.390 06:41:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:29.390 06:41:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=373518 00:04:29.390 06:41:43 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.390 06:41:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:29.390 Waiting for target to run... 00:04:29.390 06:41:43 -- json_config/json_config.sh@114 -- # waitforlisten 373518 /var/tmp/spdk_tgt.sock 00:04:29.390 06:41:43 -- common/autotest_common.sh@819 -- # '[' -z 373518 ']' 00:04:29.390 06:41:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:29.390 06:41:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:29.390 06:41:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:29.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:29.390 06:41:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:29.390 06:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:29.390 [2024-05-15 06:41:43.363654] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:29.390 [2024-05-15 06:41:43.363752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373518 ] 00:04:29.390 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.648 [2024-05-15 06:41:43.852580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.906 [2024-05-15 06:41:43.956347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:29.906 [2024-05-15 06:41:43.956532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.188 [2024-05-15 06:41:46.997765] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.188 [2024-05-15 06:41:47.030250] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:33.188 06:41:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:33.188 06:41:47 -- common/autotest_common.sh@852 -- # return 0 00:04:33.188 06:41:47 -- json_config/json_config.sh@115 -- # echo '' 00:04:33.188 00:04:33.188 06:41:47 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:33.188 06:41:47 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:33.188 INFO: Checking if target configuration is the same... 00:04:33.188 06:41:47 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.188 06:41:47 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:33.188 06:41:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.188 + '[' 2 -ne 2 ']' 00:04:33.188 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:33.188 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:33.188 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:33.188 +++ basename /dev/fd/62 00:04:33.188 ++ mktemp /tmp/62.XXX 00:04:33.188 + tmp_file_1=/tmp/62.F70 00:04:33.188 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.188 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:33.188 + tmp_file_2=/tmp/spdk_tgt_config.json.uyb 00:04:33.188 + ret=0 00:04:33.188 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.446 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.446 + diff -u /tmp/62.F70 /tmp/spdk_tgt_config.json.uyb 00:04:33.446 + echo 'INFO: JSON config files are the same' 00:04:33.446 INFO: JSON config files are the same 00:04:33.446 + rm /tmp/62.F70 /tmp/spdk_tgt_config.json.uyb 00:04:33.446 + exit 0 00:04:33.446 06:41:47 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:33.446 06:41:47 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:33.446 INFO: changing configuration and checking if this can be detected... 
00:04:33.446 06:41:47 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.446 06:41:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:33.704 06:41:47 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.704 06:41:47 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:33.704 06:41:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.704 + '[' 2 -ne 2 ']' 00:04:33.704 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:33.704 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:33.704 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:33.704 +++ basename /dev/fd/62 00:04:33.704 ++ mktemp /tmp/62.XXX 00:04:33.704 + tmp_file_1=/tmp/62.dgS 00:04:33.704 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.704 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:33.704 + tmp_file_2=/tmp/spdk_tgt_config.json.jeN 00:04:33.704 + ret=0 00:04:33.704 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.963 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:34.221 + diff -u /tmp/62.dgS /tmp/spdk_tgt_config.json.jeN 00:04:34.221 + ret=1 00:04:34.221 + echo '=== Start of file: /tmp/62.dgS ===' 00:04:34.221 + cat /tmp/62.dgS 00:04:34.221 + echo '=== End of file: /tmp/62.dgS ===' 00:04:34.221 + echo '' 00:04:34.221 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jeN ===' 00:04:34.221 + cat /tmp/spdk_tgt_config.json.jeN 00:04:34.221 + echo '=== End of file: /tmp/spdk_tgt_config.json.jeN ===' 00:04:34.221 + echo '' 00:04:34.221 + rm /tmp/62.dgS /tmp/spdk_tgt_config.json.jeN 00:04:34.221 + exit 1 00:04:34.221 06:41:48 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:34.221 INFO: configuration change detected. 
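[Editor's sketch] The two json_diff.sh runs above are the core of the test: dump save_config from the live target, normalize both that dump and the reference spdk_tgt_config.json through config_filter.py -method sort so key ordering cannot produce false diffs, then diff the normalized files. The first pass matches; once bdev_malloc_delete removes MallocBdevForConfigChangeCheck, the second pass exits non-zero (ret=1) and the change is detected. A condensed sketch, assuming config_filter.py filters stdin to stdout as json_diff.sh uses it:
# sketch: order-insensitive compare of the live config against the saved reference
filter='test/json_config/config_filter.py -method sort'
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $filter > /tmp/live.json
$filter < spdk_tgt_config.json > /tmp/ref.json
diff -u /tmp/ref.json /tmp/live.json && echo 'configs match' || echo 'configuration change detected'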
00:04:34.221 06:41:48 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:34.221 06:41:48 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:34.221 06:41:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:34.221 06:41:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.221 06:41:48 -- json_config/json_config.sh@360 -- # local ret=0 00:04:34.221 06:41:48 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:34.221 06:41:48 -- json_config/json_config.sh@370 -- # [[ -n 373518 ]] 00:04:34.221 06:41:48 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:34.221 06:41:48 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:34.221 06:41:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:34.221 06:41:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.221 06:41:48 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:34.221 06:41:48 -- json_config/json_config.sh@246 -- # uname -s 00:04:34.221 06:41:48 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:34.221 06:41:48 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:34.221 06:41:48 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:34.221 06:41:48 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:34.221 06:41:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:34.221 06:41:48 -- common/autotest_common.sh@10 -- # set +x 00:04:34.221 06:41:48 -- json_config/json_config.sh@376 -- # killprocess 373518 00:04:34.221 06:41:48 -- common/autotest_common.sh@926 -- # '[' -z 373518 ']' 00:04:34.221 06:41:48 -- common/autotest_common.sh@930 -- # kill -0 373518 00:04:34.221 06:41:48 -- common/autotest_common.sh@931 -- # uname 00:04:34.221 06:41:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:34.221 06:41:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 373518 00:04:34.221 06:41:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:34.221 06:41:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:34.221 06:41:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 373518' 00:04:34.221 killing process with pid 373518 00:04:34.221 06:41:48 -- common/autotest_common.sh@945 -- # kill 373518 00:04:34.221 06:41:48 -- common/autotest_common.sh@950 -- # wait 373518 00:04:36.122 06:41:49 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.122 06:41:49 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:36.122 06:41:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:36.122 06:41:49 -- common/autotest_common.sh@10 -- # set +x 00:04:36.122 06:41:49 -- json_config/json_config.sh@381 -- # return 0 00:04:36.122 06:41:49 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:36.122 INFO: Success 00:04:36.122 00:04:36.122 real 0m15.982s 00:04:36.122 user 0m18.007s 00:04:36.122 sys 0m2.259s 00:04:36.122 06:41:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.122 06:41:49 -- common/autotest_common.sh@10 -- # set +x 00:04:36.122 ************************************ 00:04:36.122 END TEST json_config 00:04:36.122 ************************************ 00:04:36.122 06:41:49 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.122 06:41:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.122 06:41:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.122 06:41:49 -- common/autotest_common.sh@10 -- # set +x 00:04:36.122 ************************************ 00:04:36.122 START TEST json_config_extra_key 00:04:36.122 ************************************ 00:04:36.122 06:41:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.122 06:41:50 -- nvmf/common.sh@7 -- # uname -s 00:04:36.122 06:41:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.122 06:41:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.122 06:41:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.122 06:41:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.122 06:41:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.122 06:41:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.122 06:41:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.122 06:41:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.122 06:41:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.122 06:41:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.122 06:41:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:36.122 06:41:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:36.122 06:41:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.122 06:41:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.122 06:41:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.122 06:41:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:36.122 06:41:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.122 06:41:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.122 06:41:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.122 06:41:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.122 06:41:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.122 06:41:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.122 06:41:50 -- paths/export.sh@5 -- # export PATH 00:04:36.122 06:41:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.122 06:41:50 -- nvmf/common.sh@46 -- # : 0 00:04:36.122 06:41:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:36.122 06:41:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:36.122 06:41:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:36.122 06:41:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.122 06:41:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.122 06:41:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:36.122 06:41:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:36.122 06:41:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:36.122 INFO: launching applications... 
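The launch that follows is json_config_test_start_app: it starts spdk_tgt with the extra_key.json config and blocks in waitforlisten until the UNIX-domain RPC socket answers. A minimal sketch of that launch-and-poll pattern, assuming the standard SPDK tree layout; the wrapper function and the 100 x 0.1 s retry budget are illustrative, while the core mask, memory size, socket, and config file match the command logged just below:

    # Hypothetical helper mirroring the start/wait pattern in this trace.
    start_json_target() {
        local sock=/var/tmp/spdk_tgt.sock
        ./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" \
            --json test/json_config/extra_key.json &
        local pid=$!
        local i
        for ((i = 0; i < 100; i++)); do
            # rpc_get_methods only succeeds once the target is listening
            ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
            sleep 0.1
        done
        echo "$pid"   # caller waits on / signals this pid
    }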
00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:36.122 06:41:50 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:36.123 06:41:50 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:36.123 06:41:50 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:36.123 06:41:50 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=374450 00:04:36.123 06:41:50 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:36.123 06:41:50 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:36.123 Waiting for target to run... 00:04:36.123 06:41:50 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 374450 /var/tmp/spdk_tgt.sock 00:04:36.123 06:41:50 -- common/autotest_common.sh@819 -- # '[' -z 374450 ']' 00:04:36.123 06:41:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.123 06:41:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.123 06:41:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.123 06:41:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.123 06:41:50 -- common/autotest_common.sh@10 -- # set +x 00:04:36.123 [2024-05-15 06:41:50.081834] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:36.123 [2024-05-15 06:41:50.081945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374450 ] 00:04:36.123 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.381 [2024-05-15 06:41:50.449145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.381 [2024-05-15 06:41:50.536843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.381 [2024-05-15 06:41:50.537035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.978 06:41:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:36.978 06:41:51 -- common/autotest_common.sh@852 -- # return 0 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:36.978 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:36.978 INFO: shutting down applications... 
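The shutdown sequence that follows sends SIGINT to the target and then probes the pid for up to 30 iterations of 0.5 s before giving up. A hedged sketch of that loop; the signal, the kill -0 probe, and the 30 x 0.5 s budget are the ones exercised below, while the function wrapper itself is illustrative:

    # SIGINT-then-poll shutdown, as driven by json_config_test_shutdown_app.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        local i
        for ((i = 0; i < 30; i++)); do
            # kill -0 delivers no signal; it only tests that the pid exists
            kill -0 "$pid" 2>/dev/null || return 0   # clean shutdown
            sleep 0.5
        done
        return 1   # target survived SIGINT for ~15 s
    }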
00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 374450 ]] 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 374450 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 374450 00:04:36.978 06:41:51 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 374450 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:37.545 SPDK target shutdown done 00:04:37.545 06:41:51 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:37.545 Success 00:04:37.545 00:04:37.545 real 0m1.540s 00:04:37.545 user 0m1.526s 00:04:37.545 sys 0m0.447s 00:04:37.545 06:41:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.545 06:41:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.545 ************************************ 00:04:37.545 END TEST json_config_extra_key 00:04:37.545 ************************************ 00:04:37.545 06:41:51 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.545 06:41:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.545 06:41:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.545 06:41:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.545 ************************************ 00:04:37.545 START TEST alias_rpc 00:04:37.545 ************************************ 00:04:37.545 06:41:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.545 * Looking for test storage... 00:04:37.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:37.545 06:41:51 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:37.545 06:41:51 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=374755 00:04:37.545 06:41:51 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.545 06:41:51 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 374755 00:04:37.545 06:41:51 -- common/autotest_common.sh@819 -- # '[' -z 374755 ']' 00:04:37.545 06:41:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.545 06:41:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:37.545 06:41:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:37.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.545 06:41:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:37.545 06:41:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.545 [2024-05-15 06:41:51.650235] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:37.545 [2024-05-15 06:41:51.650331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374755 ] 00:04:37.545 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.545 [2024-05-15 06:41:51.719491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.804 [2024-05-15 06:41:51.827239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:37.804 [2024-05-15 06:41:51.827396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.370 06:41:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:38.370 06:41:52 -- common/autotest_common.sh@852 -- # return 0 00:04:38.370 06:41:52 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:38.628 06:41:52 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 374755 00:04:38.628 06:41:52 -- common/autotest_common.sh@926 -- # '[' -z 374755 ']' 00:04:38.628 06:41:52 -- common/autotest_common.sh@930 -- # kill -0 374755 00:04:38.628 06:41:52 -- common/autotest_common.sh@931 -- # uname 00:04:38.628 06:41:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:38.628 06:41:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 374755 00:04:38.886 06:41:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:38.886 06:41:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:38.886 06:41:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 374755' 00:04:38.886 killing process with pid 374755 00:04:38.886 06:41:52 -- common/autotest_common.sh@945 -- # kill 374755 00:04:38.886 06:41:52 -- common/autotest_common.sh@950 -- # wait 374755 00:04:39.144 00:04:39.144 real 0m1.785s 00:04:39.144 user 0m2.026s 00:04:39.144 sys 0m0.469s 00:04:39.144 06:41:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.144 06:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.144 ************************************ 00:04:39.144 END TEST alias_rpc 00:04:39.144 ************************************ 00:04:39.144 06:41:53 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:39.144 06:41:53 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:39.144 06:41:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.144 06:41:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.144 06:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.144 ************************************ 00:04:39.144 START TEST spdkcli_tcp 00:04:39.144 ************************************ 00:04:39.144 06:41:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:39.402 * Looking for test storage... 
00:04:39.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:39.402 06:41:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:39.402 06:41:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:39.402 06:41:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:39.402 06:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=374966 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:39.402 06:41:53 -- spdkcli/tcp.sh@27 -- # waitforlisten 374966 00:04:39.402 06:41:53 -- common/autotest_common.sh@819 -- # '[' -z 374966 ']' 00:04:39.402 06:41:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.402 06:41:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:39.402 06:41:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.402 06:41:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:39.402 06:41:53 -- common/autotest_common.sh@10 -- # set +x 00:04:39.402 [2024-05-15 06:41:53.465468] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:39.402 [2024-05-15 06:41:53.465557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid374966 ] 00:04:39.402 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.402 [2024-05-15 06:41:53.532175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.660 [2024-05-15 06:41:53.637713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.660 [2024-05-15 06:41:53.637924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.660 [2024-05-15 06:41:53.637938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.226 06:41:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:40.226 06:41:54 -- common/autotest_common.sh@852 -- # return 0 00:04:40.226 06:41:54 -- spdkcli/tcp.sh@31 -- # socat_pid=375109 00:04:40.226 06:41:54 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:40.226 06:41:54 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:40.484 [ 00:04:40.484 "bdev_malloc_delete", 00:04:40.484 "bdev_malloc_create", 00:04:40.484 "bdev_null_resize", 00:04:40.484 "bdev_null_delete", 00:04:40.484 "bdev_null_create", 00:04:40.484 "bdev_nvme_cuse_unregister", 00:04:40.484 "bdev_nvme_cuse_register", 00:04:40.484 "bdev_opal_new_user", 00:04:40.484 "bdev_opal_set_lock_state", 00:04:40.484 "bdev_opal_delete", 00:04:40.484 "bdev_opal_get_info", 00:04:40.484 "bdev_opal_create", 00:04:40.484 "bdev_nvme_opal_revert", 00:04:40.484 "bdev_nvme_opal_init", 00:04:40.484 "bdev_nvme_send_cmd", 00:04:40.484 "bdev_nvme_get_path_iostat", 00:04:40.484 "bdev_nvme_get_mdns_discovery_info", 00:04:40.484 "bdev_nvme_stop_mdns_discovery", 00:04:40.484 "bdev_nvme_start_mdns_discovery", 00:04:40.484 "bdev_nvme_set_multipath_policy", 00:04:40.484 "bdev_nvme_set_preferred_path", 00:04:40.484 "bdev_nvme_get_io_paths", 00:04:40.484 "bdev_nvme_remove_error_injection", 00:04:40.484 "bdev_nvme_add_error_injection", 00:04:40.484 "bdev_nvme_get_discovery_info", 00:04:40.484 "bdev_nvme_stop_discovery", 00:04:40.484 "bdev_nvme_start_discovery", 00:04:40.484 "bdev_nvme_get_controller_health_info", 00:04:40.484 "bdev_nvme_disable_controller", 00:04:40.484 "bdev_nvme_enable_controller", 00:04:40.484 "bdev_nvme_reset_controller", 00:04:40.484 "bdev_nvme_get_transport_statistics", 00:04:40.484 "bdev_nvme_apply_firmware", 00:04:40.484 "bdev_nvme_detach_controller", 00:04:40.484 "bdev_nvme_get_controllers", 00:04:40.484 "bdev_nvme_attach_controller", 00:04:40.484 "bdev_nvme_set_hotplug", 00:04:40.484 "bdev_nvme_set_options", 00:04:40.484 "bdev_passthru_delete", 00:04:40.484 "bdev_passthru_create", 00:04:40.484 "bdev_lvol_grow_lvstore", 00:04:40.484 "bdev_lvol_get_lvols", 00:04:40.484 "bdev_lvol_get_lvstores", 00:04:40.484 "bdev_lvol_delete", 00:04:40.484 "bdev_lvol_set_read_only", 00:04:40.484 "bdev_lvol_resize", 00:04:40.484 "bdev_lvol_decouple_parent", 00:04:40.484 "bdev_lvol_inflate", 00:04:40.484 "bdev_lvol_rename", 00:04:40.484 "bdev_lvol_clone_bdev", 00:04:40.484 "bdev_lvol_clone", 00:04:40.484 "bdev_lvol_snapshot", 00:04:40.484 "bdev_lvol_create", 00:04:40.484 "bdev_lvol_delete_lvstore", 00:04:40.484 "bdev_lvol_rename_lvstore", 00:04:40.484 "bdev_lvol_create_lvstore", 00:04:40.484 "bdev_raid_set_options", 00:04:40.484 
"bdev_raid_remove_base_bdev", 00:04:40.484 "bdev_raid_add_base_bdev", 00:04:40.484 "bdev_raid_delete", 00:04:40.484 "bdev_raid_create", 00:04:40.484 "bdev_raid_get_bdevs", 00:04:40.484 "bdev_error_inject_error", 00:04:40.484 "bdev_error_delete", 00:04:40.484 "bdev_error_create", 00:04:40.484 "bdev_split_delete", 00:04:40.484 "bdev_split_create", 00:04:40.484 "bdev_delay_delete", 00:04:40.484 "bdev_delay_create", 00:04:40.484 "bdev_delay_update_latency", 00:04:40.484 "bdev_zone_block_delete", 00:04:40.484 "bdev_zone_block_create", 00:04:40.484 "blobfs_create", 00:04:40.484 "blobfs_detect", 00:04:40.484 "blobfs_set_cache_size", 00:04:40.484 "bdev_aio_delete", 00:04:40.484 "bdev_aio_rescan", 00:04:40.484 "bdev_aio_create", 00:04:40.484 "bdev_ftl_set_property", 00:04:40.484 "bdev_ftl_get_properties", 00:04:40.484 "bdev_ftl_get_stats", 00:04:40.484 "bdev_ftl_unmap", 00:04:40.484 "bdev_ftl_unload", 00:04:40.484 "bdev_ftl_delete", 00:04:40.484 "bdev_ftl_load", 00:04:40.484 "bdev_ftl_create", 00:04:40.484 "bdev_virtio_attach_controller", 00:04:40.484 "bdev_virtio_scsi_get_devices", 00:04:40.484 "bdev_virtio_detach_controller", 00:04:40.484 "bdev_virtio_blk_set_hotplug", 00:04:40.484 "bdev_iscsi_delete", 00:04:40.484 "bdev_iscsi_create", 00:04:40.484 "bdev_iscsi_set_options", 00:04:40.484 "accel_error_inject_error", 00:04:40.484 "ioat_scan_accel_module", 00:04:40.484 "dsa_scan_accel_module", 00:04:40.484 "iaa_scan_accel_module", 00:04:40.484 "iscsi_set_options", 00:04:40.484 "iscsi_get_auth_groups", 00:04:40.484 "iscsi_auth_group_remove_secret", 00:04:40.484 "iscsi_auth_group_add_secret", 00:04:40.484 "iscsi_delete_auth_group", 00:04:40.484 "iscsi_create_auth_group", 00:04:40.484 "iscsi_set_discovery_auth", 00:04:40.484 "iscsi_get_options", 00:04:40.484 "iscsi_target_node_request_logout", 00:04:40.484 "iscsi_target_node_set_redirect", 00:04:40.484 "iscsi_target_node_set_auth", 00:04:40.484 "iscsi_target_node_add_lun", 00:04:40.484 "iscsi_get_connections", 00:04:40.484 "iscsi_portal_group_set_auth", 00:04:40.484 "iscsi_start_portal_group", 00:04:40.484 "iscsi_delete_portal_group", 00:04:40.484 "iscsi_create_portal_group", 00:04:40.484 "iscsi_get_portal_groups", 00:04:40.484 "iscsi_delete_target_node", 00:04:40.484 "iscsi_target_node_remove_pg_ig_maps", 00:04:40.484 "iscsi_target_node_add_pg_ig_maps", 00:04:40.484 "iscsi_create_target_node", 00:04:40.484 "iscsi_get_target_nodes", 00:04:40.484 "iscsi_delete_initiator_group", 00:04:40.484 "iscsi_initiator_group_remove_initiators", 00:04:40.484 "iscsi_initiator_group_add_initiators", 00:04:40.484 "iscsi_create_initiator_group", 00:04:40.484 "iscsi_get_initiator_groups", 00:04:40.484 "nvmf_set_crdt", 00:04:40.484 "nvmf_set_config", 00:04:40.484 "nvmf_set_max_subsystems", 00:04:40.484 "nvmf_subsystem_get_listeners", 00:04:40.484 "nvmf_subsystem_get_qpairs", 00:04:40.484 "nvmf_subsystem_get_controllers", 00:04:40.484 "nvmf_get_stats", 00:04:40.484 "nvmf_get_transports", 00:04:40.484 "nvmf_create_transport", 00:04:40.484 "nvmf_get_targets", 00:04:40.484 "nvmf_delete_target", 00:04:40.484 "nvmf_create_target", 00:04:40.484 "nvmf_subsystem_allow_any_host", 00:04:40.484 "nvmf_subsystem_remove_host", 00:04:40.484 "nvmf_subsystem_add_host", 00:04:40.484 "nvmf_subsystem_remove_ns", 00:04:40.484 "nvmf_subsystem_add_ns", 00:04:40.484 "nvmf_subsystem_listener_set_ana_state", 00:04:40.485 "nvmf_discovery_get_referrals", 00:04:40.485 "nvmf_discovery_remove_referral", 00:04:40.485 "nvmf_discovery_add_referral", 00:04:40.485 "nvmf_subsystem_remove_listener", 
00:04:40.485 "nvmf_subsystem_add_listener", 00:04:40.485 "nvmf_delete_subsystem", 00:04:40.485 "nvmf_create_subsystem", 00:04:40.485 "nvmf_get_subsystems", 00:04:40.485 "env_dpdk_get_mem_stats", 00:04:40.485 "nbd_get_disks", 00:04:40.485 "nbd_stop_disk", 00:04:40.485 "nbd_start_disk", 00:04:40.485 "ublk_recover_disk", 00:04:40.485 "ublk_get_disks", 00:04:40.485 "ublk_stop_disk", 00:04:40.485 "ublk_start_disk", 00:04:40.485 "ublk_destroy_target", 00:04:40.485 "ublk_create_target", 00:04:40.485 "virtio_blk_create_transport", 00:04:40.485 "virtio_blk_get_transports", 00:04:40.485 "vhost_controller_set_coalescing", 00:04:40.485 "vhost_get_controllers", 00:04:40.485 "vhost_delete_controller", 00:04:40.485 "vhost_create_blk_controller", 00:04:40.485 "vhost_scsi_controller_remove_target", 00:04:40.485 "vhost_scsi_controller_add_target", 00:04:40.485 "vhost_start_scsi_controller", 00:04:40.485 "vhost_create_scsi_controller", 00:04:40.485 "thread_set_cpumask", 00:04:40.485 "framework_get_scheduler", 00:04:40.485 "framework_set_scheduler", 00:04:40.485 "framework_get_reactors", 00:04:40.485 "thread_get_io_channels", 00:04:40.485 "thread_get_pollers", 00:04:40.485 "thread_get_stats", 00:04:40.485 "framework_monitor_context_switch", 00:04:40.485 "spdk_kill_instance", 00:04:40.485 "log_enable_timestamps", 00:04:40.485 "log_get_flags", 00:04:40.485 "log_clear_flag", 00:04:40.485 "log_set_flag", 00:04:40.485 "log_get_level", 00:04:40.485 "log_set_level", 00:04:40.485 "log_get_print_level", 00:04:40.485 "log_set_print_level", 00:04:40.485 "framework_enable_cpumask_locks", 00:04:40.485 "framework_disable_cpumask_locks", 00:04:40.485 "framework_wait_init", 00:04:40.485 "framework_start_init", 00:04:40.485 "scsi_get_devices", 00:04:40.485 "bdev_get_histogram", 00:04:40.485 "bdev_enable_histogram", 00:04:40.485 "bdev_set_qos_limit", 00:04:40.485 "bdev_set_qd_sampling_period", 00:04:40.485 "bdev_get_bdevs", 00:04:40.485 "bdev_reset_iostat", 00:04:40.485 "bdev_get_iostat", 00:04:40.485 "bdev_examine", 00:04:40.485 "bdev_wait_for_examine", 00:04:40.485 "bdev_set_options", 00:04:40.485 "notify_get_notifications", 00:04:40.485 "notify_get_types", 00:04:40.485 "accel_get_stats", 00:04:40.485 "accel_set_options", 00:04:40.485 "accel_set_driver", 00:04:40.485 "accel_crypto_key_destroy", 00:04:40.485 "accel_crypto_keys_get", 00:04:40.485 "accel_crypto_key_create", 00:04:40.485 "accel_assign_opc", 00:04:40.485 "accel_get_module_info", 00:04:40.485 "accel_get_opc_assignments", 00:04:40.485 "vmd_rescan", 00:04:40.485 "vmd_remove_device", 00:04:40.485 "vmd_enable", 00:04:40.485 "sock_set_default_impl", 00:04:40.485 "sock_impl_set_options", 00:04:40.485 "sock_impl_get_options", 00:04:40.485 "iobuf_get_stats", 00:04:40.485 "iobuf_set_options", 00:04:40.485 "framework_get_pci_devices", 00:04:40.485 "framework_get_config", 00:04:40.485 "framework_get_subsystems", 00:04:40.485 "trace_get_info", 00:04:40.485 "trace_get_tpoint_group_mask", 00:04:40.485 "trace_disable_tpoint_group", 00:04:40.485 "trace_enable_tpoint_group", 00:04:40.485 "trace_clear_tpoint_mask", 00:04:40.485 "trace_set_tpoint_mask", 00:04:40.485 "spdk_get_version", 00:04:40.485 "rpc_get_methods" 00:04:40.485 ] 00:04:40.485 06:41:54 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:40.485 06:41:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:40.485 06:41:54 -- common/autotest_common.sh@10 -- # set +x 00:04:40.485 06:41:54 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:40.485 06:41:54 -- spdkcli/tcp.sh@38 -- # killprocess 
374966 00:04:40.485 06:41:54 -- common/autotest_common.sh@926 -- # '[' -z 374966 ']' 00:04:40.485 06:41:54 -- common/autotest_common.sh@930 -- # kill -0 374966 00:04:40.485 06:41:54 -- common/autotest_common.sh@931 -- # uname 00:04:40.485 06:41:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:40.485 06:41:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 374966 00:04:40.485 06:41:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:40.485 06:41:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:40.485 06:41:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 374966' 00:04:40.485 killing process with pid 374966 00:04:40.485 06:41:54 -- common/autotest_common.sh@945 -- # kill 374966 00:04:40.485 06:41:54 -- common/autotest_common.sh@950 -- # wait 374966 00:04:41.050 00:04:41.050 real 0m1.746s 00:04:41.050 user 0m3.330s 00:04:41.050 sys 0m0.471s 00:04:41.050 06:41:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.050 06:41:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.050 ************************************ 00:04:41.050 END TEST spdkcli_tcp 00:04:41.050 ************************************ 00:04:41.050 06:41:55 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.050 06:41:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.050 06:41:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.050 06:41:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.050 ************************************ 00:04:41.050 START TEST dpdk_mem_utility 00:04:41.050 ************************************ 00:04:41.050 06:41:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:41.050 * Looking for test storage... 00:04:41.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:41.050 06:41:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:41.050 06:41:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=375300 00:04:41.050 06:41:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.050 06:41:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 375300 00:04:41.050 06:41:55 -- common/autotest_common.sh@819 -- # '[' -z 375300 ']' 00:04:41.050 06:41:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.050 06:41:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:41.050 06:41:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.050 06:41:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:41.050 06:41:55 -- common/autotest_common.sh@10 -- # set +x 00:04:41.050 [2024-05-15 06:41:55.236272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:41.050 [2024-05-15 06:41:55.236368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375300 ] 00:04:41.050 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.310 [2024-05-15 06:41:55.303590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.310 [2024-05-15 06:41:55.409143] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:41.310 [2024-05-15 06:41:55.409303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.243 06:41:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:42.243 06:41:56 -- common/autotest_common.sh@852 -- # return 0 00:04:42.243 06:41:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.243 06:41:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.243 06:41:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:42.243 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.243 { 00:04:42.243 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.243 } 00:04:42.243 06:41:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:42.243 06:41:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:42.243 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:42.243 1 heaps totaling size 814.000000 MiB 00:04:42.243 size: 814.000000 MiB heap id: 0 00:04:42.243 end heaps---------- 00:04:42.243 8 mempools totaling size 598.116089 MiB 00:04:42.243 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:42.244 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:42.244 size: 84.521057 MiB name: bdev_io_375300 00:04:42.244 size: 51.011292 MiB name: evtpool_375300 00:04:42.244 size: 50.003479 MiB name: msgpool_375300 00:04:42.244 size: 21.763794 MiB name: PDU_Pool 00:04:42.244 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:42.244 size: 0.026123 MiB name: Session_Pool 00:04:42.244 end mempools------- 00:04:42.244 6 memzones totaling size 4.142822 MiB 00:04:42.244 size: 1.000366 MiB name: RG_ring_0_375300 00:04:42.244 size: 1.000366 MiB name: RG_ring_1_375300 00:04:42.244 size: 1.000366 MiB name: RG_ring_4_375300 00:04:42.244 size: 1.000366 MiB name: RG_ring_5_375300 00:04:42.244 size: 0.125366 MiB name: RG_ring_2_375300 00:04:42.244 size: 0.015991 MiB name: RG_ring_3_375300 00:04:42.244 end memzones------- 00:04:42.244 06:41:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.244 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:42.244 list of free elements. 
size: 12.519348 MiB 00:04:42.244 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:42.244 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:42.244 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:42.244 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:42.244 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:42.244 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:42.244 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:42.244 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:42.244 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:42.244 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:42.244 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:42.244 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:42.244 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:42.244 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:42.244 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:42.244 list of standard malloc elements. size: 199.218079 MiB 00:04:42.244 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:42.244 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:42.244 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:42.244 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:42.244 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:42.244 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.244 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:42.244 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.244 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:42.244 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.244 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:42.244 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:42.244 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:42.244 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:42.244 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:42.244 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:42.244 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:42.244 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:42.244 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:42.244 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:42.244 list of memzone associated elements. size: 602.262573 MiB 00:04:42.244 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:42.244 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.244 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:42.244 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.244 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:42.244 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_375300_0 00:04:42.244 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:42.244 associated memzone info: size: 48.002930 MiB name: MP_evtpool_375300_0 00:04:42.244 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:42.244 associated memzone info: size: 48.002930 MiB name: MP_msgpool_375300_0 00:04:42.244 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:42.244 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.244 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:42.244 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.244 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:42.244 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_375300 00:04:42.244 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:42.244 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_375300 00:04:42.244 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.244 associated memzone info: size: 1.007996 MiB name: MP_evtpool_375300 00:04:42.244 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:42.244 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.244 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:42.244 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.244 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:42.244 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.244 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:42.244 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.244 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:42.244 associated memzone info: size: 1.000366 MiB name: RG_ring_0_375300 00:04:42.244 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:42.244 associated memzone info: size: 1.000366 MiB name: RG_ring_1_375300 00:04:42.244 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:42.244 associated memzone info: size: 1.000366 MiB name: RG_ring_4_375300 00:04:42.244 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:42.244 associated memzone info: size: 1.000366 MiB name: RG_ring_5_375300 00:04:42.244 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:42.244 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_375300 00:04:42.244 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:42.244 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.244 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:42.244 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.244 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:42.244 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.244 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:42.244 associated memzone info: size: 0.125366 MiB name: RG_ring_2_375300 00:04:42.244 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:42.244 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.244 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:42.244 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.244 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:42.244 associated memzone info: size: 0.015991 MiB name: RG_ring_3_375300 00:04:42.244 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:42.244 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.244 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:42.244 associated memzone info: size: 0.000183 MiB name: MP_msgpool_375300 00:04:42.244 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:42.244 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_375300 00:04:42.244 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:42.244 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.244 06:41:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.244 06:41:56 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 375300 00:04:42.244 06:41:56 -- common/autotest_common.sh@926 -- # '[' -z 375300 ']' 00:04:42.244 06:41:56 -- common/autotest_common.sh@930 -- # kill -0 375300 00:04:42.244 06:41:56 -- common/autotest_common.sh@931 -- # uname 00:04:42.244 06:41:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:42.245 06:41:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 375300 00:04:42.245 06:41:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:42.245 06:41:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:42.245 06:41:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 375300' 00:04:42.245 killing process with pid 375300 00:04:42.245 06:41:56 -- common/autotest_common.sh@945 -- # kill 375300 00:04:42.245 06:41:56 -- common/autotest_common.sh@950 -- # wait 375300 00:04:42.811 00:04:42.811 real 0m1.652s 00:04:42.811 user 0m1.813s 00:04:42.811 sys 0m0.439s 00:04:42.811 06:41:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.811 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.811 ************************************ 00:04:42.811 END TEST dpdk_mem_utility 00:04:42.811 ************************************ 00:04:42.811 06:41:56 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.811 06:41:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.811 06:41:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.811 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.811 
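The dpdk_mem_utility pass above reduces to two scripts run against the live target: an RPC that makes spdk_tgt write its DPDK heap state to a file, and a parser that summarizes it. Reproduced as a hedged sketch; the default RPC socket is assumed, and the dump filename comes from the RPC response shown above:

    # Dump and summarize the target's DPDK memory, as in test_dpdk_mem_info.sh.
    ./scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heap / mempool / memzone totals
    ./scripts/dpdk_mem_info.py -m 0           # element-level view of heap id 0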
************************************ 00:04:42.811 START TEST event 00:04:42.811 ************************************ 00:04:42.811 06:41:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:42.811 * Looking for test storage... 00:04:42.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.811 06:41:56 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:42.811 06:41:56 -- bdev/nbd_common.sh@6 -- # set -e 00:04:42.811 06:41:56 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.811 06:41:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:42.811 06:41:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.811 06:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:42.811 ************************************ 00:04:42.811 START TEST event_perf 00:04:42.811 ************************************ 00:04:42.811 06:41:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:42.811 Running I/O for 1 seconds...[2024-05-15 06:41:56.888250] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:42.811 [2024-05-15 06:41:56.888335] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375497 ] 00:04:42.811 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.811 [2024-05-15 06:41:56.958511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.069 [2024-05-15 06:41:57.070513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.069 [2024-05-15 06:41:57.070570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.069 [2024-05-15 06:41:57.070635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.069 [2024-05-15 06:41:57.070638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.002 Running I/O for 1 seconds... 00:04:44.002 lcore 0: 236285 00:04:44.002 lcore 1: 236284 00:04:44.002 lcore 2: 236284 00:04:44.002 lcore 3: 236285 00:04:44.002 done. 
00:04:44.002 00:04:44.002 real 0m1.326s 00:04:44.002 user 0m4.229s 00:04:44.002 sys 0m0.093s 00:04:44.002 06:41:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.002 06:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.002 ************************************ 00:04:44.002 END TEST event_perf 00:04:44.002 ************************************ 00:04:44.002 06:41:58 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:44.002 06:41:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:44.002 06:41:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.002 06:41:58 -- common/autotest_common.sh@10 -- # set +x 00:04:44.002 ************************************ 00:04:44.002 START TEST event_reactor 00:04:44.002 ************************************ 00:04:44.002 06:41:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:44.260 [2024-05-15 06:41:58.244573] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:44.260 [2024-05-15 06:41:58.244651] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375654 ] 00:04:44.260 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.261 [2024-05-15 06:41:58.319374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.261 [2024-05-15 06:41:58.444481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.634 test_start 00:04:45.634 oneshot 00:04:45.634 tick 100 00:04:45.634 tick 100 00:04:45.634 tick 250 00:04:45.634 tick 100 00:04:45.634 tick 100 00:04:45.634 tick 100 00:04:45.634 tick 250 00:04:45.634 tick 500 00:04:45.634 tick 100 00:04:45.634 tick 100 00:04:45.634 tick 250 00:04:45.634 tick 100 00:04:45.634 tick 100 00:04:45.634 test_end 00:04:45.634 00:04:45.634 real 0m1.331s 00:04:45.634 user 0m1.235s 00:04:45.634 sys 0m0.091s 00:04:45.634 06:41:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.634 06:41:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.634 ************************************ 00:04:45.634 END TEST event_reactor 00:04:45.634 ************************************ 00:04:45.634 06:41:59 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.634 06:41:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:45.634 06:41:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.634 06:41:59 -- common/autotest_common.sh@10 -- # set +x 00:04:45.634 ************************************ 00:04:45.634 START TEST event_reactor_perf 00:04:45.634 ************************************ 00:04:45.634 06:41:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:45.634 [2024-05-15 06:41:59.598245] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:45.634 [2024-05-15 06:41:59.598352] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid375942 ] 00:04:45.634 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.634 [2024-05-15 06:41:59.675863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.634 [2024-05-15 06:41:59.792946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.008 test_start 00:04:47.008 test_end 00:04:47.008 Performance: 358306 events per second 00:04:47.008 00:04:47.008 real 0m1.330s 00:04:47.008 user 0m1.230s 00:04:47.008 sys 0m0.093s 00:04:47.008 06:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.008 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:47.008 ************************************ 00:04:47.008 END TEST event_reactor_perf 00:04:47.008 ************************************ 00:04:47.008 06:42:00 -- event/event.sh@49 -- # uname -s 00:04:47.008 06:42:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:47.008 06:42:00 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.008 06:42:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.008 06:42:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.008 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:47.008 ************************************ 00:04:47.008 START TEST event_scheduler 00:04:47.008 ************************************ 00:04:47.008 06:42:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:47.008 * Looking for test storage... 00:04:47.008 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:47.008 06:42:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:47.008 06:42:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=376189 00:04:47.008 06:42:00 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:47.008 06:42:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.008 06:42:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 376189 00:04:47.009 06:42:00 -- common/autotest_common.sh@819 -- # '[' -z 376189 ']' 00:04:47.009 06:42:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.009 06:42:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:47.009 06:42:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.009 06:42:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:47.009 06:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:47.009 [2024-05-15 06:42:01.027338] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:04:47.009 [2024-05-15 06:42:01.027441] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376189 ] 00:04:47.009 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.009 [2024-05-15 06:42:01.096909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.009 [2024-05-15 06:42:01.209798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.009 [2024-05-15 06:42:01.209862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.009 [2024-05-15 06:42:01.209938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.009 [2024-05-15 06:42:01.209938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.009 06:42:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.009 06:42:01 -- common/autotest_common.sh@852 -- # return 0 00:04:47.009 06:42:01 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:47.009 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.009 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.009 POWER: Env isn't set yet! 00:04:47.009 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:47.009 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:47.009 POWER: Cannot get available frequencies of lcore 0 00:04:47.009 POWER: Attempting to initialise PSTAT power management... 00:04:47.009 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:47.009 POWER: Initialized successfully for lcore 0 power management 00:04:47.281 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:47.281 POWER: Initialized successfully for lcore 1 power management 00:04:47.281 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:47.281 POWER: Initialized successfully for lcore 2 power management 00:04:47.281 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:47.281 POWER: Initialized successfully for lcore 3 power management 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 [2024-05-15 06:42:01.372495] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
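The scheduler_create_thread subtest that follows drives the scheduler test app through its RPC plugin: threads are created with a name, an optional cpumask, and an 'activity' percentage that the dummy poller simulates. The same calls, sketched standalone; this assumes the scheduler_plugin module is on PYTHONPATH (the harness arranges that) and that the create call prints the new thread id, which is how rpc_cmd captures thread_id below:

    RPC="./scripts/rpc.py --plugin scheduler_plugin"
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # always busy on core 0
    $RPC scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # pinned but idle
    tid=$($RPC scheduler_thread_create -n half_active -a 0)       # unpinned, idle for now
    $RPC scheduler_thread_set_active "$tid" 50                    # retune to 50% busy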
00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:47.281 06:42:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.281 06:42:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 ************************************ 00:04:47.281 START TEST scheduler_create_thread 00:04:47.281 ************************************ 00:04:47.281 06:42:01 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 2 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 3 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 4 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 5 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 6 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 7 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 8 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 9 00:04:47.281 
06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 10 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.281 06:42:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.281 06:42:01 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:47.281 06:42:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.281 06:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:47.854 06:42:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:47.854 06:42:02 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:47.854 06:42:02 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:47.854 06:42:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:47.854 06:42:02 -- common/autotest_common.sh@10 -- # set +x 00:04:49.226 06:42:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:49.226 00:04:49.226 real 0m1.755s 00:04:49.226 user 0m0.007s 00:04:49.226 sys 0m0.005s 00:04:49.226 06:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.226 06:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.226 ************************************ 00:04:49.226 END TEST scheduler_create_thread 00:04:49.226 ************************************ 00:04:49.226 06:42:03 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:49.226 06:42:03 -- scheduler/scheduler.sh@46 -- # killprocess 376189 00:04:49.226 06:42:03 -- common/autotest_common.sh@926 -- # '[' -z 376189 ']' 00:04:49.226 06:42:03 -- common/autotest_common.sh@930 -- # kill -0 376189 00:04:49.226 06:42:03 -- common/autotest_common.sh@931 -- # uname 00:04:49.226 06:42:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.226 06:42:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 376189 00:04:49.226 06:42:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:49.226 06:42:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:49.226 06:42:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 376189' 00:04:49.226 killing process with pid 376189 00:04:49.226 06:42:03 -- common/autotest_common.sh@945 -- # kill 376189 00:04:49.226 06:42:03 -- common/autotest_common.sh@950 -- # wait 376189 00:04:49.483 [2024-05-15 06:42:03.612377] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
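The scheduler_create_thread test above is driven entirely through rpc_cmd with the scheduler plugin: four 100%-active threads pinned to cores 0x1 through 0x8, four idle pinned threads, an unpinned thread at ~30% activity, a half_active thread raised to 50% with scheduler_thread_set_active, and a throwaway thread removed with scheduler_thread_delete. A condensed sketch of that sequence (the traced calls are unrolled one per line in the log; the rpc_cmd wrapper is assumed to forward to scripts/rpc.py on the app's socket):

  # assumed wrapper: forwards RPCs to the running app's socket
  rpc_cmd() { scripts/rpc.py "$@"; }

  for mask in 0x1 0x2 0x4 0x8; do
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100   # busy, pinned
      rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned   -m "$mask" -a 0     # idle, pinned
  done
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30    # unpinned, ~30% busy
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50          # raise it to 50% busy
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"                 # exercise the delete path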
00:04:49.742 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:49.742 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:49.742 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:49.742 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:49.742 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:49.742 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:49.742 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:49.742 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:49.742 00:04:49.742 real 0m2.935s 00:04:49.742 user 0m3.778s 00:04:49.742 sys 0m0.295s 00:04:49.742 06:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.742 06:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.742 ************************************ 00:04:49.742 END TEST event_scheduler 00:04:49.742 ************************************ 00:04:49.742 06:42:03 -- event/event.sh@51 -- # modprobe -n nbd 00:04:49.742 06:42:03 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:49.742 06:42:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.742 06:42:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.742 06:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.742 ************************************ 00:04:49.742 START TEST app_repeat 00:04:49.742 ************************************ 00:04:49.742 06:42:03 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:49.742 06:42:03 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.742 06:42:03 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.742 06:42:03 -- event/event.sh@13 -- # local nbd_list 00:04:49.742 06:42:03 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.742 06:42:03 -- event/event.sh@14 -- # local bdev_list 00:04:49.742 06:42:03 -- event/event.sh@15 -- # local repeat_times=4 00:04:49.742 06:42:03 -- event/event.sh@17 -- # modprobe nbd 00:04:49.742 06:42:03 -- event/event.sh@19 -- # repeat_pid=376590 00:04:49.742 06:42:03 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:49.742 06:42:03 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.742 06:42:03 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 376590' 00:04:49.742 Process app_repeat pid: 376590 00:04:49.742 06:42:03 -- event/event.sh@23 -- # for i in {0..2} 00:04:49.742 06:42:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:49.742 spdk_app_start Round 0 00:04:49.742 06:42:03 -- event/event.sh@25 -- # waitforlisten 376590 /var/tmp/spdk-nbd.sock 00:04:49.742 06:42:03 -- common/autotest_common.sh@819 -- # '[' -z 376590 ']' 00:04:49.742 06:42:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:49.742 06:42:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:49.742 06:42:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
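app_repeat is started against its own RPC socket and re-awaited at the top of every round. The launch itself is traced above; the body of waitforlisten is not, so the polling loop below is an assumption (rpc_get_methods is a stock SPDK RPC, used here only as a liveness probe):

  modprobe nbd
  rpc_server=/var/tmp/spdk-nbd.sock
  test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

  # assumed body of waitforlisten: poll until the socket answers RPCs
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      scripts/rpc.py -s "$rpc_server" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done
  (( i < max_retries ))   # fail the test if the app never started listening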
00:04:49.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:49.742 06:42:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:49.742 06:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:49.742 [2024-05-15 06:42:03.933379] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:04:49.742 [2024-05-15 06:42:03.933445] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376590 ] 00:04:49.742 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.000 [2024-05-15 06:42:04.007388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.000 [2024-05-15 06:42:04.116917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.000 [2024-05-15 06:42:04.116921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.935 06:42:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.935 06:42:04 -- common/autotest_common.sh@852 -- # return 0 00:04:50.935 06:42:04 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.935 Malloc0 00:04:50.935 06:42:05 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.236 Malloc1 00:04:51.236 06:42:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@12 -- # local i 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.236 06:42:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.494 /dev/nbd0 00:04:51.494 06:42:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.494 06:42:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.494 06:42:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:51.494 06:42:05 -- common/autotest_common.sh@857 -- # local i 00:04:51.494 06:42:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:51.494 06:42:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:51.494 06:42:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:51.494 06:42:05 -- 
common/autotest_common.sh@861 -- # break 00:04:51.494 06:42:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:51.494 06:42:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:51.494 06:42:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.494 1+0 records in 00:04:51.494 1+0 records out 00:04:51.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202 s, 20.3 MB/s 00:04:51.494 06:42:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.494 06:42:05 -- common/autotest_common.sh@874 -- # size=4096 00:04:51.494 06:42:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.494 06:42:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:51.494 06:42:05 -- common/autotest_common.sh@877 -- # return 0 00:04:51.494 06:42:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.494 06:42:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.494 06:42:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.752 /dev/nbd1 00:04:51.752 06:42:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.752 06:42:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.752 06:42:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:51.752 06:42:05 -- common/autotest_common.sh@857 -- # local i 00:04:51.752 06:42:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:51.752 06:42:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:51.752 06:42:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:51.752 06:42:05 -- common/autotest_common.sh@861 -- # break 00:04:51.752 06:42:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:51.752 06:42:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:51.752 06:42:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.752 1+0 records in 00:04:51.752 1+0 records out 00:04:51.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237436 s, 17.3 MB/s 00:04:51.752 06:42:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.752 06:42:05 -- common/autotest_common.sh@874 -- # size=4096 00:04:51.752 06:42:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.752 06:42:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:51.752 06:42:05 -- common/autotest_common.sh@877 -- # return 0 00:04:51.752 06:42:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.752 06:42:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.752 06:42:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.752 06:42:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.752 06:42:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.010 { 00:04:52.010 "nbd_device": "/dev/nbd0", 00:04:52.010 "bdev_name": "Malloc0" 00:04:52.010 }, 00:04:52.010 { 00:04:52.010 "nbd_device": "/dev/nbd1", 
00:04:52.010 "bdev_name": "Malloc1" 00:04:52.010 } 00:04:52.010 ]' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.010 { 00:04:52.010 "nbd_device": "/dev/nbd0", 00:04:52.010 "bdev_name": "Malloc0" 00:04:52.010 }, 00:04:52.010 { 00:04:52.010 "nbd_device": "/dev/nbd1", 00:04:52.010 "bdev_name": "Malloc1" 00:04:52.010 } 00:04:52.010 ]' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.010 /dev/nbd1' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.010 /dev/nbd1' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.010 256+0 records in 00:04:52.010 256+0 records out 00:04:52.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423739 s, 247 MB/s 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.010 256+0 records in 00:04:52.010 256+0 records out 00:04:52.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222776 s, 47.1 MB/s 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.010 256+0 records in 00:04:52.010 256+0 records out 00:04:52.010 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230402 s, 45.5 MB/s 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.010 06:42:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@51 -- # local i 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@41 -- # break 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.268 06:42:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@41 -- # break 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.526 06:42:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.783 06:42:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.783 06:42:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.783 06:42:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@65 -- # true 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.783 06:42:07 -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.783 06:42:07 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.041 06:42:07 -- event/event.sh@35 -- # 
sleep 3 00:04:53.607 [2024-05-15 06:42:07.541564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.607 [2024-05-15 06:42:07.653351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.607 [2024-05-15 06:42:07.653352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.607 [2024-05-15 06:42:07.708500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.607 [2024-05-15 06:42:07.708562] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.135 06:42:10 -- event/event.sh@23 -- # for i in {0..2} 00:04:56.135 06:42:10 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:56.135 spdk_app_start Round 1 00:04:56.135 06:42:10 -- event/event.sh@25 -- # waitforlisten 376590 /var/tmp/spdk-nbd.sock 00:04:56.135 06:42:10 -- common/autotest_common.sh@819 -- # '[' -z 376590 ']' 00:04:56.135 06:42:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.135 06:42:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:56.135 06:42:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.135 06:42:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:56.135 06:42:10 -- common/autotest_common.sh@10 -- # set +x 00:04:56.392 06:42:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:56.392 06:42:10 -- common/autotest_common.sh@852 -- # return 0 00:04:56.392 06:42:10 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.650 Malloc0 00:04:56.650 06:42:10 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.908 Malloc1 00:04:56.908 06:42:11 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.908 06:42:11 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.908 06:42:11 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.908 06:42:11 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.908 06:42:11 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.908 06:42:11 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@12 -- # local i 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.909 06:42:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.166 /dev/nbd0 00:04:57.166 06:42:11 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.166 06:42:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.166 06:42:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:57.166 06:42:11 -- common/autotest_common.sh@857 -- # local i 00:04:57.166 06:42:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:57.166 06:42:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:57.166 06:42:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:57.166 06:42:11 -- common/autotest_common.sh@861 -- # break 00:04:57.166 06:42:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:57.166 06:42:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:57.166 06:42:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.166 1+0 records in 00:04:57.166 1+0 records out 00:04:57.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180619 s, 22.7 MB/s 00:04:57.166 06:42:11 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.166 06:42:11 -- common/autotest_common.sh@874 -- # size=4096 00:04:57.166 06:42:11 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.166 06:42:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:57.166 06:42:11 -- common/autotest_common.sh@877 -- # return 0 00:04:57.166 06:42:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.166 06:42:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.166 06:42:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.424 /dev/nbd1 00:04:57.424 06:42:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.424 06:42:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.424 06:42:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:57.424 06:42:11 -- common/autotest_common.sh@857 -- # local i 00:04:57.424 06:42:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:57.424 06:42:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:57.424 06:42:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:57.424 06:42:11 -- common/autotest_common.sh@861 -- # break 00:04:57.424 06:42:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:57.424 06:42:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:57.424 06:42:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.424 1+0 records in 00:04:57.424 1+0 records out 00:04:57.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154477 s, 26.5 MB/s 00:04:57.424 06:42:11 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.424 06:42:11 -- common/autotest_common.sh@874 -- # size=4096 00:04:57.424 06:42:11 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.424 06:42:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:57.424 06:42:11 -- common/autotest_common.sh@877 -- # return 0 00:04:57.424 06:42:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.424 06:42:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.424 06:42:11 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.424 06:42:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.424 06:42:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.682 06:42:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.682 { 00:04:57.682 "nbd_device": "/dev/nbd0", 00:04:57.682 "bdev_name": "Malloc0" 00:04:57.682 }, 00:04:57.682 { 00:04:57.682 "nbd_device": "/dev/nbd1", 00:04:57.682 "bdev_name": "Malloc1" 00:04:57.682 } 00:04:57.682 ]' 00:04:57.682 06:42:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.682 { 00:04:57.682 "nbd_device": "/dev/nbd0", 00:04:57.682 "bdev_name": "Malloc0" 00:04:57.682 }, 00:04:57.682 { 00:04:57.682 "nbd_device": "/dev/nbd1", 00:04:57.682 "bdev_name": "Malloc1" 00:04:57.682 } 00:04:57.683 ]' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.683 /dev/nbd1' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.683 /dev/nbd1' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.683 256+0 records in 00:04:57.683 256+0 records out 00:04:57.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469111 s, 224 MB/s 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.683 256+0 records in 00:04:57.683 256+0 records out 00:04:57.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021374 s, 49.1 MB/s 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.683 256+0 records in 00:04:57.683 256+0 records out 00:04:57.683 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252455 s, 41.5 MB/s 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@51 -- # local i 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.683 06:42:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@41 -- # break 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.941 06:42:12 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@41 -- # break 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.199 06:42:12 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@65 -- # true 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.457 06:42:12 -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.457 06:42:12 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:58.716 06:42:12 -- event/event.sh@35 -- # sleep 3 00:04:59.282 [2024-05-15 06:42:13.223383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.282 [2024-05-15 06:42:13.337971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.282 [2024-05-15 06:42:13.337971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.282 [2024-05-15 06:42:13.399912] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.282 [2024-05-15 06:42:13.400005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.808 06:42:15 -- event/event.sh@23 -- # for i in {0..2} 00:05:01.808 06:42:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:01.808 spdk_app_start Round 2 00:05:01.808 06:42:15 -- event/event.sh@25 -- # waitforlisten 376590 /var/tmp/spdk-nbd.sock 00:05:01.808 06:42:15 -- common/autotest_common.sh@819 -- # '[' -z 376590 ']' 00:05:01.808 06:42:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.808 06:42:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:01.808 06:42:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
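Every waitfornbd call in these rounds follows the pattern visible in the autotest_common.sh trace: poll /proc/partitions until the device shows up, then prove it is readable by copying a single block off it and checking the size of the copy. A sketch, with the two traced retry loops condensed into one and the poll delay assumed:

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break   # device registered yet?
          sleep 0.1                                          # assumed back-off between polls
      done
      # read a single 4 KiB block straight off the device
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]   # a non-empty copy means the export is live
  }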
00:05:01.808 06:42:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:01.808 06:42:15 -- common/autotest_common.sh@10 -- # set +x 00:05:02.065 06:42:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:02.065 06:42:16 -- common/autotest_common.sh@852 -- # return 0 00:05:02.065 06:42:16 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.323 Malloc0 00:05:02.323 06:42:16 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.581 Malloc1 00:05:02.581 06:42:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.581 06:42:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.838 /dev/nbd0 00:05:02.838 06:42:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.838 06:42:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.838 06:42:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:02.838 06:42:16 -- common/autotest_common.sh@857 -- # local i 00:05:02.838 06:42:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:02.838 06:42:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:02.838 06:42:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:02.838 06:42:16 -- common/autotest_common.sh@861 -- # break 00:05:02.838 06:42:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:02.838 06:42:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:02.838 06:42:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.838 1+0 records in 00:05:02.838 1+0 records out 00:05:02.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186157 s, 22.0 MB/s 00:05:02.838 06:42:16 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.838 06:42:16 -- common/autotest_common.sh@874 -- # size=4096 00:05:02.838 06:42:16 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.838 06:42:16 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:05:02.838 06:42:16 -- common/autotest_common.sh@877 -- # return 0 00:05:02.838 06:42:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.838 06:42:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.838 06:42:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.095 /dev/nbd1 00:05:03.095 06:42:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.095 06:42:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.095 06:42:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:03.095 06:42:17 -- common/autotest_common.sh@857 -- # local i 00:05:03.095 06:42:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:03.095 06:42:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:03.095 06:42:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:03.095 06:42:17 -- common/autotest_common.sh@861 -- # break 00:05:03.095 06:42:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:03.095 06:42:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:03.095 06:42:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.095 1+0 records in 00:05:03.095 1+0 records out 00:05:03.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211866 s, 19.3 MB/s 00:05:03.095 06:42:17 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.095 06:42:17 -- common/autotest_common.sh@874 -- # size=4096 00:05:03.095 06:42:17 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.095 06:42:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:03.095 06:42:17 -- common/autotest_common.sh@877 -- # return 0 00:05:03.095 06:42:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.096 06:42:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.096 06:42:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.096 06:42:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.096 06:42:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.353 06:42:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.353 { 00:05:03.353 "nbd_device": "/dev/nbd0", 00:05:03.353 "bdev_name": "Malloc0" 00:05:03.353 }, 00:05:03.353 { 00:05:03.354 "nbd_device": "/dev/nbd1", 00:05:03.354 "bdev_name": "Malloc1" 00:05:03.354 } 00:05:03.354 ]' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.354 { 00:05:03.354 "nbd_device": "/dev/nbd0", 00:05:03.354 "bdev_name": "Malloc0" 00:05:03.354 }, 00:05:03.354 { 00:05:03.354 "nbd_device": "/dev/nbd1", 00:05:03.354 "bdev_name": "Malloc1" 00:05:03.354 } 00:05:03.354 ]' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.354 /dev/nbd1' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.354 /dev/nbd1' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.354 06:42:17 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.354 256+0 records in 00:05:03.354 256+0 records out 00:05:03.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385822 s, 272 MB/s 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.354 256+0 records in 00:05:03.354 256+0 records out 00:05:03.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241268 s, 43.5 MB/s 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.354 256+0 records in 00:05:03.354 256+0 records out 00:05:03.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246016 s, 42.6 MB/s 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.354 06:42:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.611 06:42:17 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@41 -- # break 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.611 06:42:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@41 -- # break 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.869 06:42:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@65 -- # true 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.126 06:42:18 -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.126 06:42:18 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.384 06:42:18 -- event/event.sh@35 -- # sleep 3 00:05:04.641 [2024-05-15 06:42:18.835691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.899 [2024-05-15 06:42:18.949964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.899 [2024-05-15 06:42:18.949969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.899 [2024-05-15 06:42:19.011714] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.899 [2024-05-15 06:42:19.011789] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
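The write/verify halves of nbd_rpc_data_verify repeat identically in every round: 1 MiB of urandom is staged in a temp file, copied onto each exported device with O_DIRECT, then compared back byte by byte; afterwards nbd_get_count pipes the nbd_get_disks JSON through jq -r '.[] | .nbd_device' and grep -c /dev/nbd to confirm nothing is left exported. The dd/cmp core, as traced (temp path shortened):

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=/tmp/nbdrandtest

  # write phase: stage 256 x 4 KiB of random data, then copy it onto each device
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for i in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
  done

  # verify phase: the first 1M of every device must match the staged file
  for i in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$i"   # exits non-zero on the first differing byte
  done
  rm "$tmp_file"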
00:05:07.482 06:42:21 -- event/event.sh@38 -- # waitforlisten 376590 /var/tmp/spdk-nbd.sock 00:05:07.482 06:42:21 -- common/autotest_common.sh@819 -- # '[' -z 376590 ']' 00:05:07.482 06:42:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.482 06:42:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:07.482 06:42:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:07.482 06:42:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:07.482 06:42:21 -- common/autotest_common.sh@10 -- # set +x 00:05:07.740 06:42:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:07.740 06:42:21 -- common/autotest_common.sh@852 -- # return 0 00:05:07.740 06:42:21 -- event/event.sh@39 -- # killprocess 376590 00:05:07.740 06:42:21 -- common/autotest_common.sh@926 -- # '[' -z 376590 ']' 00:05:07.740 06:42:21 -- common/autotest_common.sh@930 -- # kill -0 376590 00:05:07.740 06:42:21 -- common/autotest_common.sh@931 -- # uname 00:05:07.740 06:42:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:07.740 06:42:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 376590 00:05:07.740 06:42:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:07.740 06:42:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:07.740 06:42:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 376590' 00:05:07.740 killing process with pid 376590 00:05:07.740 06:42:21 -- common/autotest_common.sh@945 -- # kill 376590 00:05:07.740 06:42:21 -- common/autotest_common.sh@950 -- # wait 376590 00:05:07.998 spdk_app_start is called in Round 0. 00:05:07.998 Shutdown signal received, stop current app iteration 00:05:07.998 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:07.998 spdk_app_start is called in Round 1. 00:05:07.998 Shutdown signal received, stop current app iteration 00:05:07.998 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:07.998 spdk_app_start is called in Round 2. 00:05:07.998 Shutdown signal received, stop current app iteration 00:05:07.998 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:05:07.998 spdk_app_start is called in Round 3. 
00:05:07.998 Shutdown signal received, stop current app iteration 00:05:07.998 06:42:22 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.998 06:42:22 -- event/event.sh@42 -- # return 0 00:05:07.998 00:05:07.998 real 0m18.157s 00:05:07.998 user 0m39.566s 00:05:07.998 sys 0m3.225s 00:05:07.998 06:42:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.998 06:42:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.998 ************************************ 00:05:07.998 END TEST app_repeat 00:05:07.998 ************************************ 00:05:07.998 06:42:22 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.998 06:42:22 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.998 06:42:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.998 06:42:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.998 06:42:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.998 ************************************ 00:05:07.998 START TEST cpu_locks 00:05:07.998 ************************************ 00:05:07.998 06:42:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:07.998 * Looking for test storage... 00:05:07.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:07.998 06:42:22 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:07.998 06:42:22 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:07.998 06:42:22 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:07.998 06:42:22 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:07.998 06:42:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.998 06:42:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.998 06:42:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.998 ************************************ 00:05:07.998 START TEST default_locks 00:05:07.998 ************************************ 00:05:07.998 06:42:22 -- common/autotest_common.sh@1104 -- # default_locks 00:05:07.998 06:42:22 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=379613 00:05:07.998 06:42:22 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.998 06:42:22 -- event/cpu_locks.sh@47 -- # waitforlisten 379613 00:05:07.998 06:42:22 -- common/autotest_common.sh@819 -- # '[' -z 379613 ']' 00:05:07.998 06:42:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.998 06:42:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:07.998 06:42:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.998 06:42:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:07.998 06:42:22 -- common/autotest_common.sh@10 -- # set +x 00:05:07.998 [2024-05-15 06:42:22.195673] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
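killprocess is the common teardown used after app_repeat above and after each cpu_locks case below. Reconstructed from the traced checks (the sudo guard's fallback branch is never exercised in this log, so its handling is simplified here):

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                        # is it still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1            # simplified: never signal a sudo wrapper directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap it so sockets and locks are really released
  }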
00:05:07.998 [2024-05-15 06:42:22.195754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379613 ] 00:05:07.998 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.257 [2024-05-15 06:42:22.264206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.257 [2024-05-15 06:42:22.367895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.257 [2024-05-15 06:42:22.368091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.191 06:42:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.191 06:42:23 -- common/autotest_common.sh@852 -- # return 0 00:05:09.191 06:42:23 -- event/cpu_locks.sh@49 -- # locks_exist 379613 00:05:09.191 06:42:23 -- event/cpu_locks.sh@22 -- # lslocks -p 379613 00:05:09.191 06:42:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.448 lslocks: write error 00:05:09.448 06:42:23 -- event/cpu_locks.sh@50 -- # killprocess 379613 00:05:09.448 06:42:23 -- common/autotest_common.sh@926 -- # '[' -z 379613 ']' 00:05:09.448 06:42:23 -- common/autotest_common.sh@930 -- # kill -0 379613 00:05:09.448 06:42:23 -- common/autotest_common.sh@931 -- # uname 00:05:09.448 06:42:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:09.448 06:42:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 379613 00:05:09.448 06:42:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:09.448 06:42:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:09.448 06:42:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 379613' 00:05:09.448 killing process with pid 379613 00:05:09.448 06:42:23 -- common/autotest_common.sh@945 -- # kill 379613 00:05:09.448 06:42:23 -- common/autotest_common.sh@950 -- # wait 379613 00:05:10.014 06:42:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 379613 00:05:10.014 06:42:23 -- common/autotest_common.sh@640 -- # local es=0 00:05:10.014 06:42:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 379613 00:05:10.014 06:42:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:10.014 06:42:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:10.014 06:42:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:10.014 06:42:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:10.014 06:42:23 -- common/autotest_common.sh@643 -- # waitforlisten 379613 00:05:10.014 06:42:23 -- common/autotest_common.sh@819 -- # '[' -z 379613 ']' 00:05:10.014 06:42:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.014 06:42:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.014 06:42:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
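default_locks passes or fails on the locks_exist probe traced above: the freshly started target must hold a file lock whose name starts with spdk_cpu_lock. The "lslocks: write error" that follows is almost certainly benign broken-pipe noise, since grep -q exits on the first match while lslocks keeps writing. A sketch consistent with the trace:

    # Does the target hold a per-core lock file? (usage: locks_exist 379613)
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }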
00:05:10.014 06:42:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.014 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (379613) - No such process 00:05:10.014 ERROR: process (pid: 379613) is no longer running 00:05:10.014 06:42:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.014 06:42:23 -- common/autotest_common.sh@852 -- # return 1 00:05:10.014 06:42:23 -- common/autotest_common.sh@643 -- # es=1 00:05:10.014 06:42:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:10.014 06:42:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:10.014 06:42:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:10.014 06:42:23 -- event/cpu_locks.sh@54 -- # no_locks 00:05:10.014 06:42:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.014 06:42:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.014 06:42:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.014 00:05:10.014 real 0m1.809s 00:05:10.014 user 0m1.928s 00:05:10.014 sys 0m0.542s 00:05:10.014 06:42:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.014 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.014 ************************************ 00:05:10.014 END TEST default_locks 00:05:10.014 ************************************ 00:05:10.014 06:42:23 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:10.014 06:42:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.014 06:42:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.014 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.014 ************************************ 00:05:10.014 START TEST default_locks_via_rpc 00:05:10.014 ************************************ 00:05:10.014 06:42:23 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:10.014 06:42:23 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=379791 00:05:10.014 06:42:23 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.014 06:42:23 -- event/cpu_locks.sh@63 -- # waitforlisten 379791 00:05:10.014 06:42:23 -- common/autotest_common.sh@819 -- # '[' -z 379791 ']' 00:05:10.014 06:42:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.014 06:42:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.014 06:42:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.014 06:42:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.014 06:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:10.014 [2024-05-15 06:42:24.033586] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:10.014 [2024-05-15 06:42:24.033694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid379791 ] 00:05:10.014 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.014 [2024-05-15 06:42:24.106228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.014 [2024-05-15 06:42:24.224952] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.014 [2024-05-15 06:42:24.225121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.947 06:42:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.947 06:42:24 -- common/autotest_common.sh@852 -- # return 0 00:05:10.947 06:42:24 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:10.947 06:42:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:10.947 06:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.947 06:42:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:10.947 06:42:24 -- event/cpu_locks.sh@67 -- # no_locks 00:05:10.947 06:42:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.947 06:42:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.947 06:42:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.947 06:42:24 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:10.947 06:42:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:10.947 06:42:24 -- common/autotest_common.sh@10 -- # set +x 00:05:10.947 06:42:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:10.947 06:42:24 -- event/cpu_locks.sh@71 -- # locks_exist 379791 00:05:10.947 06:42:24 -- event/cpu_locks.sh@22 -- # lslocks -p 379791 00:05:10.947 06:42:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.205 06:42:25 -- event/cpu_locks.sh@73 -- # killprocess 379791 00:05:11.205 06:42:25 -- common/autotest_common.sh@926 -- # '[' -z 379791 ']' 00:05:11.205 06:42:25 -- common/autotest_common.sh@930 -- # kill -0 379791 00:05:11.205 06:42:25 -- common/autotest_common.sh@931 -- # uname 00:05:11.205 06:42:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:11.205 06:42:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 379791 00:05:11.205 06:42:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:11.205 06:42:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:11.205 06:42:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 379791' 00:05:11.205 killing process with pid 379791 00:05:11.205 06:42:25 -- common/autotest_common.sh@945 -- # kill 379791 00:05:11.205 06:42:25 -- common/autotest_common.sh@950 -- # wait 379791 00:05:11.771 00:05:11.771 real 0m1.721s 00:05:11.771 user 0m1.830s 00:05:11.771 sys 0m0.565s 00:05:11.771 06:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.771 06:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.771 ************************************ 00:05:11.771 END TEST default_locks_via_rpc 00:05:11.771 ************************************ 00:05:11.771 06:42:25 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:11.771 06:42:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.771 06:42:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.771 06:42:25 -- common/autotest_common.sh@10 
-- # set +x 00:05:11.771 ************************************ 00:05:11.771 START TEST non_locking_app_on_locked_coremask 00:05:11.771 ************************************ 00:05:11.771 06:42:25 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:11.771 06:42:25 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=380085 00:05:11.771 06:42:25 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.771 06:42:25 -- event/cpu_locks.sh@81 -- # waitforlisten 380085 /var/tmp/spdk.sock 00:05:11.771 06:42:25 -- common/autotest_common.sh@819 -- # '[' -z 380085 ']' 00:05:11.771 06:42:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.771 06:42:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.771 06:42:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.771 06:42:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.771 06:42:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.771 [2024-05-15 06:42:25.781081] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:11.771 [2024-05-15 06:42:25.781188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380085 ] 00:05:11.771 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.771 [2024-05-15 06:42:25.853284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.771 [2024-05-15 06:42:25.963970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.771 [2024-05-15 06:42:25.964170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.705 06:42:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.705 06:42:26 -- common/autotest_common.sh@852 -- # return 0 00:05:12.705 06:42:26 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=380220 00:05:12.705 06:42:26 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:12.705 06:42:26 -- event/cpu_locks.sh@85 -- # waitforlisten 380220 /var/tmp/spdk2.sock 00:05:12.705 06:42:26 -- common/autotest_common.sh@819 -- # '[' -z 380220 ']' 00:05:12.705 06:42:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.705 06:42:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.705 06:42:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.705 06:42:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.705 06:42:26 -- common/autotest_common.sh@10 -- # set +x 00:05:12.705 [2024-05-15 06:42:26.738360] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:12.705 [2024-05-15 06:42:26.738452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380220 ] 00:05:12.705 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.705 [2024-05-15 06:42:26.846814] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:12.705 [2024-05-15 06:42:26.846857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.963 [2024-05-15 06:42:27.078758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.963 [2024-05-15 06:42:27.078958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.530 06:42:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:13.530 06:42:27 -- common/autotest_common.sh@852 -- # return 0 00:05:13.530 06:42:27 -- event/cpu_locks.sh@87 -- # locks_exist 380085 00:05:13.530 06:42:27 -- event/cpu_locks.sh@22 -- # lslocks -p 380085 00:05:13.530 06:42:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.096 lslocks: write error 00:05:14.096 06:42:28 -- event/cpu_locks.sh@89 -- # killprocess 380085 00:05:14.096 06:42:28 -- common/autotest_common.sh@926 -- # '[' -z 380085 ']' 00:05:14.096 06:42:28 -- common/autotest_common.sh@930 -- # kill -0 380085 00:05:14.096 06:42:28 -- common/autotest_common.sh@931 -- # uname 00:05:14.096 06:42:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:14.096 06:42:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 380085 00:05:14.096 06:42:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:14.096 06:42:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:14.096 06:42:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 380085' 00:05:14.096 killing process with pid 380085 00:05:14.096 06:42:28 -- common/autotest_common.sh@945 -- # kill 380085 00:05:14.096 06:42:28 -- common/autotest_common.sh@950 -- # wait 380085 00:05:15.029 06:42:29 -- event/cpu_locks.sh@90 -- # killprocess 380220 00:05:15.029 06:42:29 -- common/autotest_common.sh@926 -- # '[' -z 380220 ']' 00:05:15.029 06:42:29 -- common/autotest_common.sh@930 -- # kill -0 380220 00:05:15.029 06:42:29 -- common/autotest_common.sh@931 -- # uname 00:05:15.030 06:42:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:15.030 06:42:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 380220 00:05:15.030 06:42:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:15.030 06:42:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:15.030 06:42:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 380220' 00:05:15.030 killing process with pid 380220 00:05:15.030 06:42:29 -- common/autotest_common.sh@945 -- # kill 380220 00:05:15.030 06:42:29 -- common/autotest_common.sh@950 -- # wait 380220 00:05:15.596 00:05:15.596 real 0m3.959s 00:05:15.596 user 0m4.219s 00:05:15.596 sys 0m1.141s 00:05:15.596 06:42:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.596 06:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.596 ************************************ 00:05:15.596 END TEST non_locking_app_on_locked_coremask 00:05:15.596 ************************************ 00:05:15.596 06:42:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
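non_locking_app_on_locked_coremask just finished for exactly the reason its trace shows: the second target passes --disable-cpumask-locks, so it never contends for core 0's lock file and both instances coexist with separate RPC sockets. The test starting here mirrors it, with the first target opting out so a second, locking target can still claim the core. Schematically, with the flags from the trace and paths shortened:

    # First target claims core 0's lock; the second opts out of claiming.
    spdk_tgt -m 0x1 &                                                  # holds spdk_cpu_lock_000
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # no claim, no conflict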
00:05:15.596 06:42:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.596 06:42:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.596 06:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.596 ************************************ 00:05:15.596 START TEST locking_app_on_unlocked_coremask 00:05:15.596 ************************************ 00:05:15.596 06:42:29 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:15.596 06:42:29 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=380542 00:05:15.596 06:42:29 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:15.596 06:42:29 -- event/cpu_locks.sh@99 -- # waitforlisten 380542 /var/tmp/spdk.sock 00:05:15.596 06:42:29 -- common/autotest_common.sh@819 -- # '[' -z 380542 ']' 00:05:15.596 06:42:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.596 06:42:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.596 06:42:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.596 06:42:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.596 06:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:15.596 [2024-05-15 06:42:29.768733] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:15.596 [2024-05-15 06:42:29.768809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380542 ] 00:05:15.596 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.854 [2024-05-15 06:42:29.836645] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:15.854 [2024-05-15 06:42:29.836682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.854 [2024-05-15 06:42:29.941174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.854 [2024-05-15 06:42:29.941330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.788 06:42:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.788 06:42:30 -- common/autotest_common.sh@852 -- # return 0 00:05:16.788 06:42:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=380675 00:05:16.788 06:42:30 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:16.788 06:42:30 -- event/cpu_locks.sh@103 -- # waitforlisten 380675 /var/tmp/spdk2.sock 00:05:16.788 06:42:30 -- common/autotest_common.sh@819 -- # '[' -z 380675 ']' 00:05:16.788 06:42:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.788 06:42:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.788 06:42:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
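Every launch above funnels through waitforlisten, which blocks until the target's JSON-RPC Unix socket is serviceable. A simplified sketch, assuming a socket-existence probe as a stand-in for the real helper's RPC round-trip (which this trace does not show); rpc_addr and max_retries mirror the traced locals:

    # Simplified waitforlisten: poll for the RPC socket, bail if the pid dies.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            [ -S "$rpc_addr" ] && return 0         # assumption: socket present == ready
            kill -0 "$pid" 2>/dev/null || return 1 # target died while we waited
            sleep 0.1
        done
        return 1
    }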
00:05:16.788 06:42:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.788 06:42:30 -- common/autotest_common.sh@10 -- # set +x 00:05:16.788 [2024-05-15 06:42:30.747391] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:16.788 [2024-05-15 06:42:30.747457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid380675 ] 00:05:16.788 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.788 [2024-05-15 06:42:30.859019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.046 [2024-05-15 06:42:31.091412] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:17.046 [2024-05-15 06:42:31.091590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.611 06:42:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:17.611 06:42:31 -- common/autotest_common.sh@852 -- # return 0 00:05:17.611 06:42:31 -- event/cpu_locks.sh@105 -- # locks_exist 380675 00:05:17.611 06:42:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.611 06:42:31 -- event/cpu_locks.sh@22 -- # lslocks -p 380675 00:05:18.176 lslocks: write error 00:05:18.176 06:42:32 -- event/cpu_locks.sh@107 -- # killprocess 380542 00:05:18.176 06:42:32 -- common/autotest_common.sh@926 -- # '[' -z 380542 ']' 00:05:18.176 06:42:32 -- common/autotest_common.sh@930 -- # kill -0 380542 00:05:18.176 06:42:32 -- common/autotest_common.sh@931 -- # uname 00:05:18.176 06:42:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:18.176 06:42:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 380542 00:05:18.176 06:42:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:18.176 06:42:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:18.176 06:42:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 380542' 00:05:18.176 killing process with pid 380542 00:05:18.176 06:42:32 -- common/autotest_common.sh@945 -- # kill 380542 00:05:18.176 06:42:32 -- common/autotest_common.sh@950 -- # wait 380542 00:05:19.108 06:42:33 -- event/cpu_locks.sh@108 -- # killprocess 380675 00:05:19.108 06:42:33 -- common/autotest_common.sh@926 -- # '[' -z 380675 ']' 00:05:19.108 06:42:33 -- common/autotest_common.sh@930 -- # kill -0 380675 00:05:19.108 06:42:33 -- common/autotest_common.sh@931 -- # uname 00:05:19.108 06:42:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:19.108 06:42:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 380675 00:05:19.108 06:42:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:19.108 06:42:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:19.108 06:42:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 380675' 00:05:19.108 killing process with pid 380675 00:05:19.108 06:42:33 -- common/autotest_common.sh@945 -- # kill 380675 00:05:19.108 06:42:33 -- common/autotest_common.sh@950 -- # wait 380675 00:05:19.673 00:05:19.673 real 0m3.887s 00:05:19.673 user 0m4.182s 00:05:19.673 sys 0m1.134s 00:05:19.673 06:42:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.673 06:42:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.674 ************************************ 00:05:19.674 END TEST locking_app_on_unlocked_coremask 00:05:19.674 
************************************ 00:05:19.674 06:42:33 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:19.674 06:42:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.674 06:42:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.674 06:42:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.674 ************************************ 00:05:19.674 START TEST locking_app_on_locked_coremask 00:05:19.674 ************************************ 00:05:19.674 06:42:33 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:19.674 06:42:33 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=381114 00:05:19.674 06:42:33 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.674 06:42:33 -- event/cpu_locks.sh@116 -- # waitforlisten 381114 /var/tmp/spdk.sock 00:05:19.674 06:42:33 -- common/autotest_common.sh@819 -- # '[' -z 381114 ']' 00:05:19.674 06:42:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.674 06:42:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.674 06:42:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.674 06:42:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.674 06:42:33 -- common/autotest_common.sh@10 -- # set +x 00:05:19.674 [2024-05-15 06:42:33.686010] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:19.674 [2024-05-15 06:42:33.686111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381114 ] 00:05:19.674 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.674 [2024-05-15 06:42:33.759683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.674 [2024-05-15 06:42:33.872991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.674 [2024-05-15 06:42:33.873179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.635 06:42:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:20.635 06:42:34 -- common/autotest_common.sh@852 -- # return 0 00:05:20.635 06:42:34 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=381254 00:05:20.635 06:42:34 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.635 06:42:34 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 381254 /var/tmp/spdk2.sock 00:05:20.635 06:42:34 -- common/autotest_common.sh@640 -- # local es=0 00:05:20.635 06:42:34 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 381254 /var/tmp/spdk2.sock 00:05:20.635 06:42:34 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:20.635 06:42:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:20.635 06:42:34 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:20.635 06:42:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:20.635 06:42:34 -- common/autotest_common.sh@643 -- # waitforlisten 381254 /var/tmp/spdk2.sock 00:05:20.635 06:42:34 -- common/autotest_common.sh@819 -- # '[' -z 381254 ']' 
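locking_app_on_locked_coremask inverts the expectation: the second plain target must fail to start, so its waitforlisten is wrapped in the NOT helper, which succeeds only when the wrapped command fails. Stripped to its core (the traced helper also validates that the argument is executable and special-cases exit codes above 128):

    # NOT: run a command that is expected to fail; invert its exit status.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }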
00:05:20.635 06:42:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.635 06:42:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:20.635 06:42:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.635 06:42:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:20.635 06:42:34 -- common/autotest_common.sh@10 -- # set +x 00:05:20.635 [2024-05-15 06:42:34.642247] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:20.635 [2024-05-15 06:42:34.642342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381254 ] 00:05:20.635 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.635 [2024-05-15 06:42:34.754817] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 381114 has claimed it. 00:05:20.635 [2024-05-15 06:42:34.754883] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (381254) - No such process 00:05:21.200 ERROR: process (pid: 381254) is no longer running 00:05:21.200 06:42:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.200 06:42:35 -- common/autotest_common.sh@852 -- # return 1 00:05:21.200 06:42:35 -- common/autotest_common.sh@643 -- # es=1 00:05:21.200 06:42:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:21.200 06:42:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:21.200 06:42:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:21.200 06:42:35 -- event/cpu_locks.sh@122 -- # locks_exist 381114 00:05:21.200 06:42:35 -- event/cpu_locks.sh@22 -- # lslocks -p 381114 00:05:21.200 06:42:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.458 lslocks: write error 00:05:21.458 06:42:35 -- event/cpu_locks.sh@124 -- # killprocess 381114 00:05:21.458 06:42:35 -- common/autotest_common.sh@926 -- # '[' -z 381114 ']' 00:05:21.458 06:42:35 -- common/autotest_common.sh@930 -- # kill -0 381114 00:05:21.458 06:42:35 -- common/autotest_common.sh@931 -- # uname 00:05:21.458 06:42:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:21.458 06:42:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 381114 00:05:21.458 06:42:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:21.458 06:42:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:21.458 06:42:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 381114' 00:05:21.458 killing process with pid 381114 00:05:21.458 06:42:35 -- common/autotest_common.sh@945 -- # kill 381114 00:05:21.458 06:42:35 -- common/autotest_common.sh@950 -- # wait 381114 00:05:22.023 00:05:22.023 real 0m2.475s 00:05:22.023 user 0m2.761s 00:05:22.023 sys 0m0.664s 00:05:22.023 06:42:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.023 06:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.023 ************************************ 00:05:22.023 END TEST locking_app_on_locked_coremask 00:05:22.023 ************************************ 00:05:22.023 06:42:36 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:22.023 06:42:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.023 06:42:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.023 06:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.023 ************************************ 00:05:22.023 START TEST locking_overlapped_coremask 00:05:22.023 ************************************ 00:05:22.023 06:42:36 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:22.023 06:42:36 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=381428 00:05:22.023 06:42:36 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:22.023 06:42:36 -- event/cpu_locks.sh@133 -- # waitforlisten 381428 /var/tmp/spdk.sock 00:05:22.023 06:42:36 -- common/autotest_common.sh@819 -- # '[' -z 381428 ']' 00:05:22.023 06:42:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.023 06:42:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.023 06:42:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.023 06:42:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.023 06:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:22.023 [2024-05-15 06:42:36.186884] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:22.023 [2024-05-15 06:42:36.186984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381428 ] 00:05:22.023 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.282 [2024-05-15 06:42:36.261224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.282 [2024-05-15 06:42:36.382382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.282 [2024-05-15 06:42:36.382561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.282 [2024-05-15 06:42:36.382586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.282 [2024-05-15 06:42:36.382589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.214 06:42:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.214 06:42:37 -- common/autotest_common.sh@852 -- # return 0 00:05:23.214 06:42:37 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=381570 00:05:23.214 06:42:37 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 381570 /var/tmp/spdk2.sock 00:05:23.214 06:42:37 -- common/autotest_common.sh@640 -- # local es=0 00:05:23.214 06:42:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 381570 /var/tmp/spdk2.sock 00:05:23.214 06:42:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:23.214 06:42:37 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:23.214 06:42:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:23.214 06:42:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:23.214 06:42:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:23.214 06:42:37 -- common/autotest_common.sh@643 -- # 
waitforlisten 381570 /var/tmp/spdk2.sock 00:05:23.214 06:42:37 -- common/autotest_common.sh@819 -- # '[' -z 381570 ']' 00:05:23.214 06:42:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.214 06:42:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:23.214 06:42:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.214 06:42:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:23.214 06:42:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.214 [2024-05-15 06:42:37.182541] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:23.214 [2024-05-15 06:42:37.182633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381570 ] 00:05:23.215 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.215 [2024-05-15 06:42:37.284903] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 381428 has claimed it. 00:05:23.215 [2024-05-15 06:42:37.284980] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:23.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (381570) - No such process 00:05:23.779 ERROR: process (pid: 381570) is no longer running 00:05:23.779 06:42:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.779 06:42:37 -- common/autotest_common.sh@852 -- # return 1 00:05:23.779 06:42:37 -- common/autotest_common.sh@643 -- # es=1 00:05:23.779 06:42:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:23.779 06:42:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:23.779 06:42:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:23.779 06:42:37 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:23.779 06:42:37 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:23.779 06:42:37 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:23.779 06:42:37 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:23.779 06:42:37 -- event/cpu_locks.sh@141 -- # killprocess 381428 00:05:23.779 06:42:37 -- common/autotest_common.sh@926 -- # '[' -z 381428 ']' 00:05:23.779 06:42:37 -- common/autotest_common.sh@930 -- # kill -0 381428 00:05:23.779 06:42:37 -- common/autotest_common.sh@931 -- # uname 00:05:23.779 06:42:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:23.779 06:42:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 381428 00:05:23.779 06:42:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:23.779 06:42:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:23.779 06:42:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 381428' 00:05:23.779 killing process with pid 381428 00:05:23.779 06:42:37 -- common/autotest_common.sh@945 -- # kill 381428 00:05:23.779 06:42:37 -- common/autotest_common.sh@950 -- # wait 381428 
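The failure traced above is pure mask arithmetic: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so both targets need core 2 and the late arrival loses the lock race and exits. Schematically, with paths shortened:

    # 0x7 -> cores 0,1,2 ; 0x1c -> cores 2,3,4 ; core 2 is contested.
    spdk_tgt -m 0x7 &                         # claims spdk_cpu_lock_000..002
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock   # exits: core 2 already claimed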
00:05:24.344 00:05:24.344 real 0m2.216s 00:05:24.344 user 0m6.146s 00:05:24.344 sys 0m0.527s 00:05:24.344 06:42:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.344 06:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.344 ************************************ 00:05:24.344 END TEST locking_overlapped_coremask 00:05:24.344 ************************************ 00:05:24.344 06:42:38 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:24.344 06:42:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.344 06:42:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.344 06:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.344 ************************************ 00:05:24.344 START TEST locking_overlapped_coremask_via_rpc 00:05:24.344 ************************************ 00:05:24.344 06:42:38 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:24.344 06:42:38 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=381734 00:05:24.344 06:42:38 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:24.344 06:42:38 -- event/cpu_locks.sh@149 -- # waitforlisten 381734 /var/tmp/spdk.sock 00:05:24.344 06:42:38 -- common/autotest_common.sh@819 -- # '[' -z 381734 ']' 00:05:24.344 06:42:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.344 06:42:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.344 06:42:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.344 06:42:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.344 06:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:24.344 [2024-05-15 06:42:38.435695] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:24.344 [2024-05-15 06:42:38.435807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381734 ] 00:05:24.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.344 [2024-05-15 06:42:38.508920] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:24.344 [2024-05-15 06:42:38.508972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.602 [2024-05-15 06:42:38.622283] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.602 [2024-05-15 06:42:38.622533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.602 [2024-05-15 06:42:38.622606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.602 [2024-05-15 06:42:38.622610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.167 06:42:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.167 06:42:39 -- common/autotest_common.sh@852 -- # return 0 00:05:25.167 06:42:39 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=381878 00:05:25.167 06:42:39 -- event/cpu_locks.sh@153 -- # waitforlisten 381878 /var/tmp/spdk2.sock 00:05:25.167 06:42:39 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:25.167 06:42:39 -- common/autotest_common.sh@819 -- # '[' -z 381878 ']' 00:05:25.167 06:42:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.167 06:42:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.167 06:42:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.167 06:42:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.167 06:42:39 -- common/autotest_common.sh@10 -- # set +x 00:05:25.167 [2024-05-15 06:42:39.393970] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:25.167 [2024-05-15 06:42:39.394065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid381878 ] 00:05:25.424 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.424 [2024-05-15 06:42:39.494680] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
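The reactor sets in both startups follow directly from the mask bits: each set bit in -m pins one reactor, which is how 0x7 yields cores 0-2 here and 0x1c yields cores 2-4. A quick decode, runnable as-is:

    # Decode a core mask into the reactor cores it claims.
    mask=0x1c
    printf '%s -> cores:' "$mask"
    for i in {0..31}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
    printf '\n'   # prints: 0x1c -> cores: 2 3 4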
00:05:25.424 [2024-05-15 06:42:39.494720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.681 [2024-05-15 06:42:39.712867] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.681 [2024-05-15 06:42:39.717089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.681 [2024-05-15 06:42:39.717114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:25.681 [2024-05-15 06:42:39.717116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.246 06:42:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.246 06:42:40 -- common/autotest_common.sh@852 -- # return 0 00:05:26.246 06:42:40 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.246 06:42:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.246 06:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.246 06:42:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.246 06:42:40 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.246 06:42:40 -- common/autotest_common.sh@640 -- # local es=0 00:05:26.246 06:42:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.246 06:42:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:26.246 06:42:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:26.246 06:42:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:26.246 06:42:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:26.246 06:42:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.246 06:42:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.246 06:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.246 [2024-05-15 06:42:40.338026] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 381734 has claimed it. 00:05:26.246 request: 00:05:26.246 { 00:05:26.246 "method": "framework_enable_cpumask_locks", 00:05:26.246 "req_id": 1 00:05:26.246 } 00:05:26.246 Got JSON-RPC error response 00:05:26.246 response: 00:05:26.246 { 00:05:26.246 "code": -32603, 00:05:26.246 "message": "Failed to claim CPU core: 2" 00:05:26.246 } 00:05:26.246 06:42:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:26.246 06:42:40 -- common/autotest_common.sh@643 -- # es=1 00:05:26.246 06:42:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:26.246 06:42:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:26.246 06:42:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:26.246 06:42:40 -- event/cpu_locks.sh@158 -- # waitforlisten 381734 /var/tmp/spdk.sock 00:05:26.246 06:42:40 -- common/autotest_common.sh@819 -- # '[' -z 381734 ']' 00:05:26.246 06:42:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.246 06:42:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.246 06:42:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
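With --disable-cpumask-locks both targets come up unlocked; the locks are claimed afterwards over JSON-RPC, and the second, overlapping claim collides on core 2 exactly as the startup path did, surfacing as the -32603 "Failed to claim CPU core: 2" response above. The two calls, as traced:

    # Runtime lock claiming over JSON-RPC; the overlapping claim must fail.
    rpc_cmd framework_enable_cpumask_locks                              # 0x7 target: ok
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # 0x1c target: -32603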
00:05:26.247 06:42:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.247 06:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.504 06:42:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.504 06:42:40 -- common/autotest_common.sh@852 -- # return 0 00:05:26.504 06:42:40 -- event/cpu_locks.sh@159 -- # waitforlisten 381878 /var/tmp/spdk2.sock 00:05:26.504 06:42:40 -- common/autotest_common.sh@819 -- # '[' -z 381878 ']' 00:05:26.504 06:42:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.504 06:42:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.504 06:42:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.504 06:42:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.504 06:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.762 06:42:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.762 06:42:40 -- common/autotest_common.sh@852 -- # return 0 00:05:26.762 06:42:40 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:26.762 06:42:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:26.762 06:42:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:26.762 06:42:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:26.762 00:05:26.762 real 0m2.452s 00:05:26.762 user 0m1.149s 00:05:26.762 sys 0m0.225s 00:05:26.762 06:42:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.762 06:42:40 -- common/autotest_common.sh@10 -- # set +x 00:05:26.762 ************************************ 00:05:26.762 END TEST locking_overlapped_coremask_via_rpc 00:05:26.762 ************************************ 00:05:26.762 06:42:40 -- event/cpu_locks.sh@174 -- # cleanup 00:05:26.762 06:42:40 -- event/cpu_locks.sh@15 -- # [[ -z 381734 ]] 00:05:26.762 06:42:40 -- event/cpu_locks.sh@15 -- # killprocess 381734 00:05:26.762 06:42:40 -- common/autotest_common.sh@926 -- # '[' -z 381734 ']' 00:05:26.762 06:42:40 -- common/autotest_common.sh@930 -- # kill -0 381734 00:05:26.762 06:42:40 -- common/autotest_common.sh@931 -- # uname 00:05:26.762 06:42:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:26.762 06:42:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 381734 00:05:26.762 06:42:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:26.762 06:42:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:26.762 06:42:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 381734' 00:05:26.762 killing process with pid 381734 00:05:26.762 06:42:40 -- common/autotest_common.sh@945 -- # kill 381734 00:05:26.762 06:42:40 -- common/autotest_common.sh@950 -- # wait 381734 00:05:27.327 06:42:41 -- event/cpu_locks.sh@16 -- # [[ -z 381878 ]] 00:05:27.328 06:42:41 -- event/cpu_locks.sh@16 -- # killprocess 381878 00:05:27.328 06:42:41 -- common/autotest_common.sh@926 -- # '[' -z 381878 ']' 00:05:27.328 06:42:41 -- common/autotest_common.sh@930 -- # kill -0 381878 00:05:27.328 06:42:41 -- common/autotest_common.sh@931 -- # uname 00:05:27.328 
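check_remaining_locks, traced just above, then pins down the end state: exactly the 0x7 target's three lock files may survive, verified by comparing a glob of live lock files against a brace expansion of the expected names:

    # Exactly cores 0-2 may hold lock files at this point.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]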
06:42:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.328 06:42:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 381878 00:05:27.328 06:42:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:27.328 06:42:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:27.328 06:42:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 381878' 00:05:27.328 killing process with pid 381878 00:05:27.328 06:42:41 -- common/autotest_common.sh@945 -- # kill 381878 00:05:27.328 06:42:41 -- common/autotest_common.sh@950 -- # wait 381878 00:05:27.586 06:42:41 -- event/cpu_locks.sh@18 -- # rm -f 00:05:27.586 06:42:41 -- event/cpu_locks.sh@1 -- # cleanup 00:05:27.586 06:42:41 -- event/cpu_locks.sh@15 -- # [[ -z 381734 ]] 00:05:27.586 06:42:41 -- event/cpu_locks.sh@15 -- # killprocess 381734 00:05:27.586 06:42:41 -- common/autotest_common.sh@926 -- # '[' -z 381734 ']' 00:05:27.586 06:42:41 -- common/autotest_common.sh@930 -- # kill -0 381734 00:05:27.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (381734) - No such process 00:05:27.586 06:42:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 381734 is not found' 00:05:27.586 Process with pid 381734 is not found 00:05:27.586 06:42:41 -- event/cpu_locks.sh@16 -- # [[ -z 381878 ]] 00:05:27.586 06:42:41 -- event/cpu_locks.sh@16 -- # killprocess 381878 00:05:27.586 06:42:41 -- common/autotest_common.sh@926 -- # '[' -z 381878 ']' 00:05:27.586 06:42:41 -- common/autotest_common.sh@930 -- # kill -0 381878 00:05:27.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (381878) - No such process 00:05:27.586 06:42:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 381878 is not found' 00:05:27.586 Process with pid 381878 is not found 00:05:27.586 06:42:41 -- event/cpu_locks.sh@18 -- # rm -f 00:05:27.586 00:05:27.586 real 0m19.716s 00:05:27.586 user 0m34.358s 00:05:27.586 sys 0m5.627s 00:05:27.586 06:42:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.586 06:42:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.586 ************************************ 00:05:27.586 END TEST cpu_locks 00:05:27.586 ************************************ 00:05:27.844 00:05:27.844 real 0m45.018s 00:05:27.844 user 1m24.484s 00:05:27.844 sys 0m9.589s 00:05:27.844 06:42:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.844 06:42:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.844 ************************************ 00:05:27.844 END TEST event 00:05:27.845 ************************************ 00:05:27.845 06:42:41 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:27.845 06:42:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.845 06:42:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.845 06:42:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.845 ************************************ 00:05:27.845 START TEST thread 00:05:27.845 ************************************ 00:05:27.845 06:42:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:27.845 * Looking for test storage... 
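The poller_perf summaries that follow reduce to one division: poller_cost in cycles is busy cycles over total_run_count, and nanoseconds fall out of the 2.7 GHz TSC. Checking the first run's numbers (1000 pollers, 1 microsecond period):

    # 2712692406 cyc / 280000 runs ≈ 9688 cyc per poll; 9688 / 2.7 ≈ 3588 nsec
    awk 'BEGIN { c = 2712692406 / 280000; printf "%.0f cyc, %.0f nsec\n", c, c / 2.7 }'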
00:05:27.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:27.845 06:42:41 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.845 06:42:41 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:27.845 06:42:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.845 06:42:41 -- common/autotest_common.sh@10 -- # set +x 00:05:27.845 ************************************ 00:05:27.845 START TEST thread_poller_perf 00:05:27.845 ************************************ 00:05:27.845 06:42:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.845 [2024-05-15 06:42:41.930631] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:27.845 [2024-05-15 06:42:41.930716] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382250 ] 00:05:27.845 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.845 [2024-05-15 06:42:42.000632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.103 [2024-05-15 06:42:42.109208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.103 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:29.037 ====================================== 00:05:29.037 busy:2712692406 (cyc) 00:05:29.037 total_run_count: 280000 00:05:29.037 tsc_hz: 2700000000 (cyc) 00:05:29.037 ====================================== 00:05:29.037 poller_cost: 9688 (cyc), 3588 (nsec) 00:05:29.037 00:05:29.037 real 0m1.324s 00:05:29.037 user 0m1.233s 00:05:29.037 sys 0m0.084s 00:05:29.037 06:42:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.037 06:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.037 ************************************ 00:05:29.037 END TEST thread_poller_perf 00:05:29.037 ************************************ 00:05:29.037 06:42:43 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.037 06:42:43 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:29.037 06:42:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.037 06:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:29.037 ************************************ 00:05:29.037 START TEST thread_poller_perf 00:05:29.037 ************************************ 00:05:29.037 06:42:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.296 [2024-05-15 06:42:43.282503] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:29.296 [2024-05-15 06:42:43.282602] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382472 ] 00:05:29.296 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.296 [2024-05-15 06:42:43.359975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.296 [2024-05-15 06:42:43.476227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.296 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:30.670 ====================================== 00:05:30.670 busy:2703168781 (cyc) 00:05:30.670 total_run_count: 3857000 00:05:30.670 tsc_hz: 2700000000 (cyc) 00:05:30.670 ====================================== 00:05:30.670 poller_cost: 700 (cyc), 259 (nsec) 00:05:30.670 00:05:30.670 real 0m1.329s 00:05:30.670 user 0m1.228s 00:05:30.670 sys 0m0.093s 00:05:30.670 06:42:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.670 06:42:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.670 ************************************ 00:05:30.670 END TEST thread_poller_perf 00:05:30.670 ************************************ 00:05:30.670 06:42:44 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:30.670 00:05:30.670 real 0m2.754s 00:05:30.670 user 0m2.498s 00:05:30.670 sys 0m0.256s 00:05:30.670 06:42:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.670 06:42:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.670 ************************************ 00:05:30.670 END TEST thread 00:05:30.670 ************************************ 00:05:30.670 06:42:44 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:30.670 06:42:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.670 06:42:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.670 06:42:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.670 ************************************ 00:05:30.670 START TEST accel 00:05:30.670 ************************************ 00:05:30.670 06:42:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:30.670 * Looking for test storage... 00:05:30.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:30.670 06:42:44 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:30.670 06:42:44 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:30.670 06:42:44 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.670 06:42:44 -- accel/accel.sh@59 -- # spdk_tgt_pid=382725 00:05:30.671 06:42:44 -- accel/accel.sh@60 -- # waitforlisten 382725 00:05:30.671 06:42:44 -- common/autotest_common.sh@819 -- # '[' -z 382725 ']' 00:05:30.671 06:42:44 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:30.671 06:42:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.671 06:42:44 -- accel/accel.sh@58 -- # build_accel_config 00:05:30.671 06:42:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.671 06:42:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.671 06:42:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:30.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.671 06:42:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.671 06:42:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.671 06:42:44 -- common/autotest_common.sh@10 -- # set +x 00:05:30.671 06:42:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.671 06:42:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.671 06:42:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.671 06:42:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.671 06:42:44 -- accel/accel.sh@42 -- # jq -r . 00:05:30.671 [2024-05-15 06:42:44.734721] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:30.671 [2024-05-15 06:42:44.734815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382725 ] 00:05:30.671 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.671 [2024-05-15 06:42:44.805893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.929 [2024-05-15 06:42:44.918282] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.929 [2024-05-15 06:42:44.918433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.495 06:42:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.495 06:42:45 -- common/autotest_common.sh@852 -- # return 0 00:05:31.495 06:42:45 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:31.495 06:42:45 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:31.495 06:42:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:31.495 06:42:45 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:31.495 06:42:45 -- common/autotest_common.sh@10 -- # set +x 00:05:31.495 06:42:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # IFS== 00:05:31.495 06:42:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:31.495 06:42:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:31.495 06:42:45 -- accel/accel.sh@67 -- # killprocess 382725 00:05:31.495 06:42:45 -- common/autotest_common.sh@926 -- # '[' -z 382725 ']' 00:05:31.495 06:42:45 -- common/autotest_common.sh@930 -- # kill -0 382725 00:05:31.495 06:42:45 -- common/autotest_common.sh@931 -- # uname 00:05:31.495 06:42:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:31.495 06:42:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 382725 00:05:31.754 06:42:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:31.754 06:42:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:31.754 06:42:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 382725' 00:05:31.754 killing process with pid 382725 00:05:31.754 06:42:45 -- common/autotest_common.sh@945 -- # kill 382725 00:05:31.754 06:42:45 -- common/autotest_common.sh@950 -- # wait 382725 00:05:32.013 06:42:46 -- accel/accel.sh@68 -- # trap - ERR 00:05:32.013 06:42:46 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:32.013 06:42:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:32.013 06:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.013 06:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.013 06:42:46 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:32.013 06:42:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:32.013 06:42:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.013 06:42:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.013 06:42:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.013 06:42:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.013 06:42:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.013 06:42:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.013 06:42:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.013 06:42:46 -- accel/accel.sh@42 -- # jq -r . 
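Throughout these tests accel_perf never reads a config file from disk: accel.sh assembles an optional JSON accel configuration in the accel_json_cfg array (empty here, since all the [[ 0 -gt 0 ]] checks fail, so every opcode stays on the software module) and hands it to the binary as the /dev/fd/62 pseudo-file seen in the command lines. A reduced sketch of that pattern (the function body and JSON shape are illustrative, not the script verbatim):

  build_accel_config() {
    # with an empty accel_json_cfg nothing module-specific is emitted and
    # accel_perf falls back to the software module for every opcode
    printf '{"subsystems":[{"subsystem":"accel","config":[]}]}'
  }
  accel_perf -c <(build_accel_config) -h   # the JSON arrives as /dev/fd/NN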
00:05:32.272 06:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.272 06:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.272 06:42:46 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:32.272 06:42:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:32.272 06:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.272 06:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.272 ************************************ 00:05:32.272 START TEST accel_missing_filename 00:05:32.272 ************************************ 00:05:32.272 06:42:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:32.272 06:42:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:32.272 06:42:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:32.272 06:42:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:32.272 06:42:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.272 06:42:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:32.272 06:42:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.272 06:42:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:32.272 06:42:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:32.272 06:42:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.272 06:42:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.272 06:42:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.272 06:42:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.272 06:42:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.272 06:42:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.272 06:42:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.272 06:42:46 -- accel/accel.sh@42 -- # jq -r . 00:05:32.272 [2024-05-15 06:42:46.289700] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:32.272 [2024-05-15 06:42:46.289767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid382903 ] 00:05:32.272 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.272 [2024-05-15 06:42:46.361939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.272 [2024-05-15 06:42:46.478752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.530 [2024-05-15 06:42:46.540563] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.530 [2024-05-15 06:42:46.622880] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:32.530 A filename is required. 
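accel_missing_filename is an expected-failure case: compress requires -l <input file>, accel_perf aborts with the 'A filename is required.' error above, and the NOT wrapper turns that non-zero exit into a test pass. The es= bookkeeping that follows strips the signal bit from statuses above 128 (234 & ~128 = 106) before deciding that the command really failed. A reduced sketch of the wrapper (simplified; the real helper in autotest_common.sh also runs the status through a case table):

  NOT() {
    local es=0
    "$@" || es=$?
    ((es > 128)) && es=$((es & ~128))  # e.g. 234 -> 106: drop the signal bit
    ((es != 0))                        # succeed only when the command failed
  }
  NOT accel_perf -t 1 -w compress     # passes: compress without -l must fail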
00:05:32.530 06:42:46 -- common/autotest_common.sh@643 -- # es=234 00:05:32.530 06:42:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:32.530 06:42:46 -- common/autotest_common.sh@652 -- # es=106 00:05:32.530 06:42:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:32.530 06:42:46 -- common/autotest_common.sh@660 -- # es=1 00:05:32.530 06:42:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:32.530 00:05:32.530 real 0m0.469s 00:05:32.530 user 0m0.347s 00:05:32.530 sys 0m0.151s 00:05:32.530 06:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.530 06:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.530 ************************************ 00:05:32.530 END TEST accel_missing_filename 00:05:32.530 ************************************ 00:05:32.530 06:42:46 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.530 06:42:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:32.531 06:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.531 06:42:46 -- common/autotest_common.sh@10 -- # set +x 00:05:32.789 ************************************ 00:05:32.789 START TEST accel_compress_verify 00:05:32.789 ************************************ 00:05:32.789 06:42:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.789 06:42:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:32.789 06:42:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.789 06:42:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:32.789 06:42:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.789 06:42:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:32.789 06:42:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.789 06:42:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.789 06:42:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:32.789 06:42:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.789 06:42:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.789 06:42:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.789 06:42:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.789 06:42:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.789 06:42:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.789 06:42:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.789 06:42:46 -- accel/accel.sh@42 -- # jq -r . 00:05:32.789 [2024-05-15 06:42:46.785743] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:32.789 [2024-05-15 06:42:46.785818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383043 ] 00:05:32.789 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.789 [2024-05-15 06:42:46.858606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.789 [2024-05-15 06:42:46.973954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.048 [2024-05-15 06:42:47.035884] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.048 [2024-05-15 06:42:47.124698] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:33.048 00:05:33.048 Compression does not support the verify option, aborting. 00:05:33.048 06:42:47 -- common/autotest_common.sh@643 -- # es=161 00:05:33.048 06:42:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:33.048 06:42:47 -- common/autotest_common.sh@652 -- # es=33 00:05:33.048 06:42:47 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:33.048 06:42:47 -- common/autotest_common.sh@660 -- # es=1 00:05:33.048 06:42:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:33.048 00:05:33.048 real 0m0.483s 00:05:33.048 user 0m0.357s 00:05:33.048 sys 0m0.158s 00:05:33.048 06:42:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.048 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.048 ************************************ 00:05:33.048 END TEST accel_compress_verify 00:05:33.048 ************************************ 00:05:33.048 06:42:47 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:33.048 06:42:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:33.048 06:42:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.048 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.048 ************************************ 00:05:33.048 START TEST accel_wrong_workload 00:05:33.048 ************************************ 00:05:33.049 06:42:47 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:33.049 06:42:47 -- common/autotest_common.sh@640 -- # local es=0 00:05:33.049 06:42:47 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:33.049 06:42:47 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:33.049 06:42:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:33.049 06:42:47 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:33.049 06:42:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:33.049 06:42:47 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:33.049 06:42:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:33.049 06:42:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.049 06:42:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.049 06:42:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.049 06:42:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.049 06:42:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.049 06:42:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.049 06:42:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.049 06:42:47 -- accel/accel.sh@42 -- # jq -r . 
00:05:33.308 Unsupported workload type: foobar [2024-05-15 06:42:47.292660] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:33.308 accel_perf options: 00:05:33.308 [-h help message] 00:05:33.308 [-q queue depth per core] 00:05:33.308 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.308 [-T number of threads per core 00:05:33.308 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.308 [-t time in seconds] 00:05:33.308 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.308 [ dif_verify, , dif_generate, dif_generate_copy 00:05:33.308 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.308 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.308 [-S for crc32c workload, use this seed value (default 0) 00:05:33.308 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.308 [-f for fill workload, use this BYTE value (default 255) 00:05:33.308 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.308 [-y verify result if this switch is on] 00:05:33.308 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.308 Can be used to spread operations across a wider range of memory. 00:05:33.308 06:42:47 -- common/autotest_common.sh@643 -- # es=1 00:05:33.308 06:42:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:33.308 06:42:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:33.308 06:42:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:33.308 00:05:33.308 real 0m0.024s 00:05:33.308 user 0m0.015s 00:05:33.308 sys 0m0.008s 00:05:33.308 06:42:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.308 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.308 ************************************ 00:05:33.308 END TEST accel_wrong_workload 00:05:33.308 ************************************ 00:05:33.308 Error: writing output failed: Broken pipe 00:05:33.308 06:42:47 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.308 06:42:47 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:33.308 06:42:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.308 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.308 ************************************ 00:05:33.308 START TEST accel_negative_buffers 00:05:33.308 ************************************ 00:05:33.308 06:42:47 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.308 06:42:47 -- common/autotest_common.sh@640 -- # local es=0 00:05:33.308 06:42:47 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:33.308 06:42:47 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:33.308 06:42:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:33.308 06:42:47 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:33.308 06:42:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:33.308 06:42:47 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:33.308 06:42:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:33.308 06:42:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.308 06:42:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.308 06:42:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.308 06:42:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.308 06:42:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.308 06:42:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.308 06:42:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.308 06:42:47 -- accel/accel.sh@42 -- # jq -r . 00:05:33.308 -x option must be non-negative. [2024-05-15 06:42:47.343309] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:33.308 accel_perf options: 00:05:33.308 [-h help message] 00:05:33.308 [-q queue depth per core] 00:05:33.308 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.308 [-T number of threads per core 00:05:33.308 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.308 [-t time in seconds] 00:05:33.308 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.308 [ dif_verify, , dif_generate, dif_generate_copy 00:05:33.308 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.308 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.308 [-S for crc32c workload, use this seed value (default 0) 00:05:33.308 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.308 [-f for fill workload, use this BYTE value (default 255) 00:05:33.308 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.308 [-y verify result if this switch is on] 00:05:33.308 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.308 Can be used to spread operations across a wider range of memory.
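accel_negative_buffers exercises the same parse path with -x -1: xor needs at least two source buffers, so spdk_app_parse_args rejects the value up front ('-x option must be non-negative.') and NOT again converts the failure into a pass. An illustrative shell restatement of that guard (accel_perf implements this in C; the function name here is invented):

  check_xor_src_count() {
    local n=$1
    if ((n < 0)); then
      printf '%s\n' '-x option must be non-negative.' >&2; return 1
    elif ((n < 2)); then
      printf '%s\n' 'xor needs at least 2 source buffers' >&2; return 1
    fi
  }
  check_xor_src_count -1   # fails, matching the parse error traced above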
00:05:33.308 06:42:47 -- common/autotest_common.sh@643 -- # es=1 00:05:33.308 06:42:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:33.308 06:42:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:33.308 06:42:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:33.308 00:05:33.308 real 0m0.023s 00:05:33.308 user 0m0.009s 00:05:33.308 sys 0m0.014s 00:05:33.308 06:42:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.308 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.308 ************************************ 00:05:33.308 END TEST accel_negative_buffers 00:05:33.308 ************************************ 00:05:33.308 Error: writing output failed: Broken pipe 00:05:33.308 06:42:47 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:33.308 06:42:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:33.308 06:42:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.308 06:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:33.308 ************************************ 00:05:33.308 START TEST accel_crc32c 00:05:33.308 ************************************ 00:05:33.308 06:42:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:33.308 06:42:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.308 06:42:47 -- accel/accel.sh@17 -- # local accel_module 00:05:33.308 06:42:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:33.308 06:42:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:33.308 06:42:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.308 06:42:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.308 06:42:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.308 06:42:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.308 06:42:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.308 06:42:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.308 06:42:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.308 06:42:47 -- accel/accel.sh@42 -- # jq -r . 00:05:33.308 [2024-05-15 06:42:47.388046] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:33.308 [2024-05-15 06:42:47.388108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383110 ] 00:05:33.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.308 [2024-05-15 06:42:47.463831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.567 [2024-05-15 06:42:47.581157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.968 06:42:48 -- accel/accel.sh@18 -- # out=' 00:05:34.968 SPDK Configuration: 00:05:34.968 Core mask: 0x1 00:05:34.968 00:05:34.968 Accel Perf Configuration: 00:05:34.968 Workload Type: crc32c 00:05:34.968 CRC-32C seed: 32 00:05:34.968 Transfer size: 4096 bytes 00:05:34.968 Vector count 1 00:05:34.968 Module: software 00:05:34.968 Queue depth: 32 00:05:34.968 Allocate depth: 32 00:05:34.968 # threads/core: 1 00:05:34.968 Run time: 1 seconds 00:05:34.968 Verify: Yes 00:05:34.968 00:05:34.968 Running for 1 seconds... 
00:05:34.968 00:05:34.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:34.968 ------------------------------------------------------------------------------------ 00:05:34.968 0,0 405760/s 1585 MiB/s 0 0 00:05:34.968 ==================================================================================== 00:05:34.968 Total 405760/s 1585 MiB/s 0 0' 00:05:34.968 06:42:48 -- accel/accel.sh@20 -- # IFS=: 00:05:34.968 06:42:48 -- accel/accel.sh@20 -- # read -r var val 00:05:34.968 06:42:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:34.969 06:42:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:34.969 06:42:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.969 06:42:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.969 06:42:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.969 06:42:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.969 06:42:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.969 06:42:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.969 06:42:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.969 06:42:48 -- accel/accel.sh@42 -- # jq -r . 00:05:34.969 [2024-05-15 06:42:48.875120] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:34.969 [2024-05-15 06:42:48.875198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383259 ] 00:05:34.969 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.969 [2024-05-15 06:42:48.948353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.969 [2024-05-15 06:42:49.063653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val= 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val= 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=0x1 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val= 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val= 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=crc32c 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=32 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 
06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val= 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=software 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=32 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=32 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=1 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val=Yes 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val= 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:34.969 06:42:49 -- accel/accel.sh@21 -- # val= 00:05:34.969 06:42:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # IFS=: 00:05:34.969 06:42:49 -- accel/accel.sh@20 -- # read -r var val 00:05:36.344 06:42:50 -- accel/accel.sh@21 -- # val= 00:05:36.344 06:42:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # IFS=: 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # read -r var val 00:05:36.344 06:42:50 -- accel/accel.sh@21 -- # val= 00:05:36.344 06:42:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # IFS=: 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # read -r var val 00:05:36.344 06:42:50 -- accel/accel.sh@21 -- # val= 00:05:36.344 06:42:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # IFS=: 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # read -r var val 00:05:36.344 06:42:50 -- accel/accel.sh@21 -- # val= 00:05:36.344 06:42:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # IFS=: 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # read -r var val 00:05:36.344 06:42:50 -- accel/accel.sh@21 -- # val= 00:05:36.344 06:42:50 -- accel/accel.sh@22 -- # case "$var" in 
00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # IFS=: 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # read -r var val 00:05:36.344 06:42:50 -- accel/accel.sh@21 -- # val= 00:05:36.344 06:42:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # IFS=: 00:05:36.344 06:42:50 -- accel/accel.sh@20 -- # read -r var val 00:05:36.344 06:42:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:36.344 06:42:50 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:36.344 06:42:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.344 00:05:36.344 real 0m2.956s 00:05:36.344 user 0m2.650s 00:05:36.344 sys 0m0.299s 00:05:36.344 06:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.344 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.344 ************************************ 00:05:36.344 END TEST accel_crc32c 00:05:36.344 ************************************ 00:05:36.344 06:42:50 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:36.344 06:42:50 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:36.344 06:42:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.344 06:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:36.344 ************************************ 00:05:36.344 START TEST accel_crc32c_C2 00:05:36.344 ************************************ 00:05:36.344 06:42:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:36.344 06:42:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.344 06:42:50 -- accel/accel.sh@17 -- # local accel_module 00:05:36.344 06:42:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:36.344 06:42:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:36.344 06:42:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.344 06:42:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.344 06:42:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.344 06:42:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.344 06:42:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.344 06:42:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.344 06:42:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.344 06:42:50 -- accel/accel.sh@42 -- # jq -r . 00:05:36.344 [2024-05-15 06:42:50.368321] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
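The bandwidth column in these result tables is just transfers per second times the 4 KiB transfer size. A bash check against the crc32c table above:

  echo $((405760 * 4096 / 1024 / 1024))   # => 1585 (MiB/s), as reported

The real 0m2.956s for a 1-second workload is expected here: the timer wraps both accel_perf launches of the test, each with its own DPDK/EAL startup, not just the measured second.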
00:05:36.344 [2024-05-15 06:42:50.368396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383540 ] 00:05:36.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.344 [2024-05-15 06:42:50.444848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.344 [2024-05-15 06:42:50.562465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.719 06:42:51 -- accel/accel.sh@18 -- # out=' 00:05:37.719 SPDK Configuration: 00:05:37.719 Core mask: 0x1 00:05:37.719 00:05:37.720 Accel Perf Configuration: 00:05:37.720 Workload Type: crc32c 00:05:37.720 CRC-32C seed: 0 00:05:37.720 Transfer size: 4096 bytes 00:05:37.720 Vector count 2 00:05:37.720 Module: software 00:05:37.720 Queue depth: 32 00:05:37.720 Allocate depth: 32 00:05:37.720 # threads/core: 1 00:05:37.720 Run time: 1 seconds 00:05:37.720 Verify: Yes 00:05:37.720 00:05:37.720 Running for 1 seconds... 00:05:37.720 00:05:37.720 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:37.720 ------------------------------------------------------------------------------------ 00:05:37.720 0,0 320448/s 2503 MiB/s 0 0 00:05:37.720 ==================================================================================== 00:05:37.720 Total 320448/s 1251 MiB/s 0 0' 00:05:37.720 06:42:51 -- accel/accel.sh@20 -- # IFS=: 00:05:37.720 06:42:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:37.720 06:42:51 -- accel/accel.sh@20 -- # read -r var val 00:05:37.720 06:42:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:37.720 06:42:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.720 06:42:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.720 06:42:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.720 06:42:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.720 06:42:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.720 06:42:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.720 06:42:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.720 06:42:51 -- accel/accel.sh@42 -- # jq -r . 00:05:37.720 [2024-05-15 06:42:51.860284] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
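The crc32c -C 2 run hands the checksum two 4 KiB iovecs per operation, which is how the per-core row reaches 2503 MiB/s; the Total row in this build appears to scale by a single 4 KiB buffer instead, hence 1251 MiB/s for the same 320448 transfers/s. The arithmetic behind both rows (bash):

  echo $((320448 * 2 * 4096 / 1024 / 1024))   # => 2503 (per-core row, two vectors)
  echo $((320448 * 4096 / 1024 / 1024))       # => 1251 (Total row, one vector)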
00:05:37.720 [2024-05-15 06:42:51.860364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383681 ] 00:05:37.720 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.720 [2024-05-15 06:42:51.932243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.978 [2024-05-15 06:42:52.049383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val= 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val= 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val=0x1 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val= 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val= 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val=crc32c 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val=0 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val= 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val=software 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@23 -- # accel_module=software 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val=32 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val=32 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- 
accel/accel.sh@21 -- # val=1 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val=Yes 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val= 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:37.978 06:42:52 -- accel/accel.sh@21 -- # val= 00:05:37.978 06:42:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # IFS=: 00:05:37.978 06:42:52 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 06:42:53 -- accel/accel.sh@21 -- # val= 00:05:39.352 06:42:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 06:42:53 -- accel/accel.sh@21 -- # val= 00:05:39.352 06:42:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 06:42:53 -- accel/accel.sh@21 -- # val= 00:05:39.352 06:42:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 06:42:53 -- accel/accel.sh@21 -- # val= 00:05:39.352 06:42:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 06:42:53 -- accel/accel.sh@21 -- # val= 00:05:39.352 06:42:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 06:42:53 -- accel/accel.sh@21 -- # val= 00:05:39.352 06:42:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 06:42:53 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 06:42:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:39.352 06:42:53 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:39.352 06:42:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.352 00:05:39.352 real 0m2.980s 00:05:39.352 user 0m2.662s 00:05:39.352 sys 0m0.309s 00:05:39.352 06:42:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.353 06:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.353 ************************************ 00:05:39.353 END TEST accel_crc32c_C2 00:05:39.353 ************************************ 00:05:39.353 06:42:53 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:39.353 06:42:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:39.353 06:42:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.353 06:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:39.353 ************************************ 00:05:39.353 START TEST accel_copy 
00:05:39.353 ************************************ 00:05:39.353 06:42:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:39.353 06:42:53 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.353 06:42:53 -- accel/accel.sh@17 -- # local accel_module 00:05:39.353 06:42:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:39.353 06:42:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:39.353 06:42:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.353 06:42:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.353 06:42:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.353 06:42:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.353 06:42:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.353 06:42:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.353 06:42:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.353 06:42:53 -- accel/accel.sh@42 -- # jq -r . 00:05:39.353 [2024-05-15 06:42:53.370412] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:39.353 [2024-05-15 06:42:53.370477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid383840 ] 00:05:39.353 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.353 [2024-05-15 06:42:53.442133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.353 [2024-05-15 06:42:53.557360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.729 06:42:54 -- accel/accel.sh@18 -- # out=' 00:05:40.729 SPDK Configuration: 00:05:40.729 Core mask: 0x1 00:05:40.729 00:05:40.729 Accel Perf Configuration: 00:05:40.729 Workload Type: copy 00:05:40.729 Transfer size: 4096 bytes 00:05:40.729 Vector count 1 00:05:40.729 Module: software 00:05:40.729 Queue depth: 32 00:05:40.729 Allocate depth: 32 00:05:40.729 # threads/core: 1 00:05:40.729 Run time: 1 seconds 00:05:40.729 Verify: Yes 00:05:40.729 00:05:40.729 Running for 1 seconds... 00:05:40.729 00:05:40.729 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:40.729 ------------------------------------------------------------------------------------ 00:05:40.729 0,0 278464/s 1087 MiB/s 0 0 00:05:40.729 ==================================================================================== 00:05:40.729 Total 278464/s 1087 MiB/s 0 0' 00:05:40.729 06:42:54 -- accel/accel.sh@20 -- # IFS=: 00:05:40.729 06:42:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:40.729 06:42:54 -- accel/accel.sh@20 -- # read -r var val 00:05:40.729 06:42:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:40.729 06:42:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.729 06:42:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.729 06:42:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.729 06:42:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.729 06:42:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.729 06:42:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.729 06:42:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.729 06:42:54 -- accel/accel.sh@42 -- # jq -r . 00:05:40.729 [2024-05-15 06:42:54.839153] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:40.729 [2024-05-15 06:42:54.839241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384103 ] 00:05:40.729 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.729 [2024-05-15 06:42:54.911418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.988 [2024-05-15 06:42:55.026635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val= 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val= 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val=0x1 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val= 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val= 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val=copy 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val= 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val=software 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@23 -- # accel_module=software 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val=32 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val=32 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val=1 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val=Yes 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val= 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:40.988 06:42:55 -- accel/accel.sh@21 -- # val= 00:05:40.988 06:42:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # IFS=: 00:05:40.988 06:42:55 -- accel/accel.sh@20 -- # read -r var val 00:05:42.363 06:42:56 -- accel/accel.sh@21 -- # val= 00:05:42.363 06:42:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.363 06:42:56 -- accel/accel.sh@21 -- # val= 00:05:42.363 06:42:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.363 06:42:56 -- accel/accel.sh@21 -- # val= 00:05:42.363 06:42:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.363 06:42:56 -- accel/accel.sh@21 -- # val= 00:05:42.363 06:42:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.363 06:42:56 -- accel/accel.sh@21 -- # val= 00:05:42.363 06:42:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.363 06:42:56 -- accel/accel.sh@21 -- # val= 00:05:42.363 06:42:56 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # IFS=: 00:05:42.363 06:42:56 -- accel/accel.sh@20 -- # read -r var val 00:05:42.363 06:42:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:42.363 06:42:56 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:42.363 06:42:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.363 00:05:42.363 real 0m2.953s 00:05:42.363 user 0m2.635s 00:05:42.363 sys 0m0.310s 00:05:42.363 06:42:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.363 06:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 ************************************ 00:05:42.363 END TEST accel_copy 00:05:42.363 ************************************ 00:05:42.363 06:42:56 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.363 06:42:56 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:42.363 06:42:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.363 06:42:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 ************************************ 00:05:42.363 START TEST accel_fill 00:05:42.363 ************************************ 00:05:42.363 06:42:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.363 06:42:56 -- accel/accel.sh@16 -- # local accel_opc 
00:05:42.363 06:42:56 -- accel/accel.sh@17 -- # local accel_module 00:05:42.363 06:42:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.363 06:42:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.363 06:42:56 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.363 06:42:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.363 06:42:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.363 06:42:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.363 06:42:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.363 06:42:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.363 06:42:56 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.363 06:42:56 -- accel/accel.sh@42 -- # jq -r . 00:05:42.363 [2024-05-15 06:42:56.356470] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:42.363 [2024-05-15 06:42:56.356558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384268 ] 00:05:42.363 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.363 [2024-05-15 06:42:56.432963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.363 [2024-05-15 06:42:56.547189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.737 06:42:57 -- accel/accel.sh@18 -- # out=' 00:05:43.737 SPDK Configuration: 00:05:43.737 Core mask: 0x1 00:05:43.737 00:05:43.737 Accel Perf Configuration: 00:05:43.737 Workload Type: fill 00:05:43.737 Fill pattern: 0x80 00:05:43.737 Transfer size: 4096 bytes 00:05:43.737 Vector count 1 00:05:43.737 Module: software 00:05:43.737 Queue depth: 64 00:05:43.737 Allocate depth: 64 00:05:43.737 # threads/core: 1 00:05:43.737 Run time: 1 seconds 00:05:43.737 Verify: Yes 00:05:43.737 00:05:43.737 Running for 1 seconds... 00:05:43.737 00:05:43.737 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.737 ------------------------------------------------------------------------------------ 00:05:43.737 0,0 403648/s 1576 MiB/s 0 0 00:05:43.737 ==================================================================================== 00:05:43.737 Total 403648/s 1576 MiB/s 0 0' 00:05:43.737 06:42:57 -- accel/accel.sh@20 -- # IFS=: 00:05:43.737 06:42:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.737 06:42:57 -- accel/accel.sh@20 -- # read -r var val 00:05:43.737 06:42:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:43.737 06:42:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.737 06:42:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.737 06:42:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.737 06:42:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.737 06:42:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.737 06:42:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.737 06:42:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.737 06:42:57 -- accel/accel.sh@42 -- # jq -r . 00:05:43.737 [2024-05-15 06:42:57.850437] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
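For the fill run above, the decimal -f 128 argument surfaces as the 0x80 fill pattern in the configuration dump, and the bandwidth follows the same transfers-times-4-KiB rule (bash check):

  printf '0x%x\n' 128                     # => 0x80, the Fill pattern line
  echo $((403648 * 4096 / 1024 / 1024))   # => 1576 (MiB/s), matching the table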
00:05:43.737 [2024-05-15 06:42:57.850518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384408 ] 00:05:43.737 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.737 [2024-05-15 06:42:57.922679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.995 [2024-05-15 06:42:58.047281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val= 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val= 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val=0x1 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val= 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val= 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val=fill 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val=0x80 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.995 06:42:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:43.995 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.995 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val= 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val=software 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val=64 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val=64 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- 
accel/accel.sh@21 -- # val=1 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val=Yes 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val= 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:43.996 06:42:58 -- accel/accel.sh@21 -- # val= 00:05:43.996 06:42:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # IFS=: 00:05:43.996 06:42:58 -- accel/accel.sh@20 -- # read -r var val 00:05:45.369 06:42:59 -- accel/accel.sh@21 -- # val= 00:05:45.369 06:42:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.369 06:42:59 -- accel/accel.sh@21 -- # val= 00:05:45.369 06:42:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.369 06:42:59 -- accel/accel.sh@21 -- # val= 00:05:45.369 06:42:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.369 06:42:59 -- accel/accel.sh@21 -- # val= 00:05:45.369 06:42:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.369 06:42:59 -- accel/accel.sh@21 -- # val= 00:05:45.369 06:42:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.369 06:42:59 -- accel/accel.sh@21 -- # val= 00:05:45.369 06:42:59 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # IFS=: 00:05:45.369 06:42:59 -- accel/accel.sh@20 -- # read -r var val 00:05:45.369 06:42:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:45.369 06:42:59 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:45.369 06:42:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.369 00:05:45.369 real 0m2.989s 00:05:45.369 user 0m2.658s 00:05:45.369 sys 0m0.322s 00:05:45.369 06:42:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.369 06:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.369 ************************************ 00:05:45.369 END TEST accel_fill 00:05:45.369 ************************************ 00:05:45.369 06:42:59 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:45.369 06:42:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:45.369 06:42:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.369 06:42:59 -- common/autotest_common.sh@10 -- # set +x 00:05:45.369 ************************************ 00:05:45.369 START TEST 
accel_copy_crc32c 00:05:45.369 ************************************ 00:05:45.369 06:42:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:45.369 06:42:59 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.369 06:42:59 -- accel/accel.sh@17 -- # local accel_module 00:05:45.369 06:42:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:45.369 06:42:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:45.369 06:42:59 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.369 06:42:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.369 06:42:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.369 06:42:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.369 06:42:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.369 06:42:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.369 06:42:59 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.369 06:42:59 -- accel/accel.sh@42 -- # jq -r . 00:05:45.369 [2024-05-15 06:42:59.370211] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:45.369 [2024-05-15 06:42:59.370301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384688 ] 00:05:45.369 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.369 [2024-05-15 06:42:59.439019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.369 [2024-05-15 06:42:59.558500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.742 06:43:00 -- accel/accel.sh@18 -- # out=' 00:05:46.742 SPDK Configuration: 00:05:46.742 Core mask: 0x1 00:05:46.742 00:05:46.742 Accel Perf Configuration: 00:05:46.742 Workload Type: copy_crc32c 00:05:46.742 CRC-32C seed: 0 00:05:46.742 Vector size: 4096 bytes 00:05:46.742 Transfer size: 4096 bytes 00:05:46.742 Vector count 1 00:05:46.742 Module: software 00:05:46.742 Queue depth: 32 00:05:46.742 Allocate depth: 32 00:05:46.742 # threads/core: 1 00:05:46.742 Run time: 1 seconds 00:05:46.742 Verify: Yes 00:05:46.742 00:05:46.742 Running for 1 seconds... 00:05:46.742 00:05:46.742 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.742 ------------------------------------------------------------------------------------ 00:05:46.742 0,0 215872/s 843 MiB/s 0 0 00:05:46.742 ==================================================================================== 00:05:46.742 Total 215872/s 843 MiB/s 0 0' 00:05:46.742 06:43:00 -- accel/accel.sh@20 -- # IFS=: 00:05:46.742 06:43:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:46.742 06:43:00 -- accel/accel.sh@20 -- # read -r var val 00:05:46.742 06:43:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:46.742 06:43:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.742 06:43:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.742 06:43:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.742 06:43:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.742 06:43:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.742 06:43:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.742 06:43:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.742 06:43:00 -- accel/accel.sh@42 -- # jq -r . 
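Each accel_perf invocation in this suite is wrapped by build_accel_config, which assembles an optional accel-module JSON config and passes it to the binary via the -c /dev/fd/62 argument visible in the traced command lines. The following is a minimal, hypothetical reconstruction of the pattern the trace shows (not SPDK's actual accel.sh; the literal 0s stand in for module-enable counts that are not visible in this log):

  build_accel_config() {
      local accel_json_cfg=()                  # per-module JSON fragments
      [[ 0 -gt 0 ]] && accel_json_cfg+=('{}')  # module guards; all false in these runs
      [[ -n '' ]] && accel_json_cfg+=('{}')    # no module/driver name is set either
      local IFS=,                              # join fragments with commas
      printf '[%s]' "${accel_json_cfg[*]}" | jq -r .  # emit and validate the JSON
  }

Every guard evaluates false here, so the generated config stays empty, which is consistent with each summary in this section reporting "Module: software".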
00:05:46.742 [2024-05-15 06:43:00.850423] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:46.742 [2024-05-15 06:43:00.850507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384837 ] 00:05:46.742 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.742 [2024-05-15 06:43:00.922817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.000 [2024-05-15 06:43:01.043712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.000 06:43:01 -- accel/accel.sh@21 -- # val= 00:05:47.000 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.000 06:43:01 -- accel/accel.sh@21 -- # val= 00:05:47.000 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.000 06:43:01 -- accel/accel.sh@21 -- # val=0x1 00:05:47.000 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.000 06:43:01 -- accel/accel.sh@21 -- # val= 00:05:47.000 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.000 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val= 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val=0 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val= 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val=software 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val=32 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 
00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val=32 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val=1 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val=Yes 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val= 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:47.001 06:43:01 -- accel/accel.sh@21 -- # val= 00:05:47.001 06:43:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # IFS=: 00:05:47.001 06:43:01 -- accel/accel.sh@20 -- # read -r var val 00:05:48.373 06:43:02 -- accel/accel.sh@21 -- # val= 00:05:48.374 06:43:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.374 06:43:02 -- accel/accel.sh@21 -- # val= 00:05:48.374 06:43:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.374 06:43:02 -- accel/accel.sh@21 -- # val= 00:05:48.374 06:43:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.374 06:43:02 -- accel/accel.sh@21 -- # val= 00:05:48.374 06:43:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.374 06:43:02 -- accel/accel.sh@21 -- # val= 00:05:48.374 06:43:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.374 06:43:02 -- accel/accel.sh@21 -- # val= 00:05:48.374 06:43:02 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # IFS=: 00:05:48.374 06:43:02 -- accel/accel.sh@20 -- # read -r var val 00:05:48.374 06:43:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:48.374 06:43:02 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:48.374 06:43:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.374 00:05:48.374 real 0m2.977s 00:05:48.374 user 0m2.672s 00:05:48.374 sys 0m0.297s 00:05:48.374 06:43:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.374 06:43:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.374 ************************************ 00:05:48.374 END TEST accel_copy_crc32c 00:05:48.374 ************************************ 00:05:48.374 
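The summary tables are easy to sanity-check: reported bandwidth is transfers/s multiplied by the transfer size. For the copy_crc32c run above, 215872 transfers/s × 4096 bytes ≈ 843 MiB/s, matching both the per-core row and the Total row; for the -C 2 variant that follows, the 8192-byte transfer size gives 154368 × 8192 = 1206 MiB/s. A minimal sketch for re-running a workload by hand outside run_test, using the binary path and flags exactly as traced (assuming the same workspace layout; the -c /dev/fd/62 config appears optional here, since every build_accel_config guard above left it empty):

  # Software copy_crc32c, 1 second, verification on (-y); with no -q/-a given,
  # the run reports the defaults seen above: queue depth 32, allocate depth 32.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w copy_crc32c -y

  # The fill variant from the earlier test, with explicit pattern and depths.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w fill -f 128 -q 64 -a 64 -y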
06:43:02 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.374 06:43:02 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:48.374 06:43:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.374 06:43:02 -- common/autotest_common.sh@10 -- # set +x 00:05:48.374 ************************************ 00:05:48.374 START TEST accel_copy_crc32c_C2 00:05:48.374 ************************************ 00:05:48.374 06:43:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:48.374 06:43:02 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.374 06:43:02 -- accel/accel.sh@17 -- # local accel_module 00:05:48.374 06:43:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:48.374 06:43:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:48.374 06:43:02 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.374 06:43:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.374 06:43:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.374 06:43:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.374 06:43:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.374 06:43:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.374 06:43:02 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.374 06:43:02 -- accel/accel.sh@42 -- # jq -r . 00:05:48.374 [2024-05-15 06:43:02.376294] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:48.374 [2024-05-15 06:43:02.376387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid384996 ] 00:05:48.374 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.374 [2024-05-15 06:43:02.451834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.374 [2024-05-15 06:43:02.573531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.747 06:43:03 -- accel/accel.sh@18 -- # out=' 00:05:49.747 SPDK Configuration: 00:05:49.747 Core mask: 0x1 00:05:49.747 00:05:49.747 Accel Perf Configuration: 00:05:49.747 Workload Type: copy_crc32c 00:05:49.747 CRC-32C seed: 0 00:05:49.747 Vector size: 4096 bytes 00:05:49.747 Transfer size: 8192 bytes 00:05:49.747 Vector count 2 00:05:49.747 Module: software 00:05:49.747 Queue depth: 32 00:05:49.747 Allocate depth: 32 00:05:49.747 # threads/core: 1 00:05:49.747 Run time: 1 seconds 00:05:49.747 Verify: Yes 00:05:49.747 00:05:49.747 Running for 1 seconds... 
00:05:49.747 00:05:49.747 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:49.747 ------------------------------------------------------------------------------------ 00:05:49.747 0,0 154368/s 1206 MiB/s 0 0 00:05:49.747 ==================================================================================== 00:05:49.747 Total 154368/s 1206 MiB/s 0 0' 00:05:49.747 06:43:03 -- accel/accel.sh@20 -- # IFS=: 00:05:49.747 06:43:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:49.747 06:43:03 -- accel/accel.sh@20 -- # read -r var val 00:05:49.747 06:43:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:49.747 06:43:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.747 06:43:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.747 06:43:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.747 06:43:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.747 06:43:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.747 06:43:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.747 06:43:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.747 06:43:03 -- accel/accel.sh@42 -- # jq -r . 00:05:49.747 [2024-05-15 06:43:03.873767] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:49.747 [2024-05-15 06:43:03.873851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385255 ] 00:05:49.747 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.747 [2024-05-15 06:43:03.951187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.005 [2024-05-15 06:43:04.079238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val= 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val= 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val=0x1 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val= 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val= 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val=0 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=:
00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val= 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val=software 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@23 -- # accel_module=software 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.005 06:43:04 -- accel/accel.sh@21 -- # val=32 00:05:50.005 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.005 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.006 06:43:04 -- accel/accel.sh@21 -- # val=32 00:05:50.006 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.006 06:43:04 -- accel/accel.sh@21 -- # val=1 00:05:50.006 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.006 06:43:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:50.006 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.006 06:43:04 -- accel/accel.sh@21 -- # val=Yes 00:05:50.006 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.006 06:43:04 -- accel/accel.sh@21 -- # val= 00:05:50.006 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:50.006 06:43:04 -- accel/accel.sh@21 -- # val= 00:05:50.006 06:43:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # IFS=: 00:05:50.006 06:43:04 -- accel/accel.sh@20 -- # read -r var val 00:05:51.417 06:43:05 -- accel/accel.sh@21 -- # val= 00:05:51.417 06:43:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.417 06:43:05 -- accel/accel.sh@21 -- # val= 00:05:51.417 06:43:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.417 06:43:05 -- accel/accel.sh@21 -- # val= 00:05:51.417 06:43:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.417 06:43:05 -- accel/accel.sh@21 -- # val= 00:05:51.417 06:43:05 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.417 06:43:05 -- accel/accel.sh@21 -- # val= 00:05:51.417 06:43:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.417 06:43:05 -- accel/accel.sh@21 -- # val= 00:05:51.417 06:43:05 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # IFS=: 00:05:51.417 06:43:05 -- accel/accel.sh@20 -- # read -r var val 00:05:51.417 06:43:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:51.417 06:43:05 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:51.417 06:43:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.417 00:05:51.417 real 0m2.995s 00:05:51.417 user 0m2.678s 00:05:51.417 sys 0m0.308s 00:05:51.417 06:43:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.417 06:43:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.417 ************************************ 00:05:51.417 END TEST accel_copy_crc32c_C2 00:05:51.417 ************************************ 00:05:51.417 06:43:05 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:51.417 06:43:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:51.417 06:43:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.417 06:43:05 -- common/autotest_common.sh@10 -- # set +x 00:05:51.417 ************************************ 00:05:51.417 START TEST accel_dualcast 00:05:51.417 ************************************ 00:05:51.417 06:43:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:51.417 06:43:05 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.417 06:43:05 -- accel/accel.sh@17 -- # local accel_module 00:05:51.417 06:43:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:51.417 06:43:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:51.417 06:43:05 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.417 06:43:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.417 06:43:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.417 06:43:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.417 06:43:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.417 06:43:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.417 06:43:05 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.417 06:43:05 -- accel/accel.sh@42 -- # jq -r . 00:05:51.417 [2024-05-15 06:43:05.396948] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:51.417 [2024-05-15 06:43:05.397032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385426 ] 00:05:51.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.417 [2024-05-15 06:43:05.469182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.417 [2024-05-15 06:43:05.589802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.790 06:43:06 -- accel/accel.sh@18 -- # out=' 00:05:52.790 SPDK Configuration: 00:05:52.790 Core mask: 0x1 00:05:52.790 00:05:52.790 Accel Perf Configuration: 00:05:52.790 Workload Type: dualcast 00:05:52.790 Transfer size: 4096 bytes 00:05:52.790 Vector count 1 00:05:52.790 Module: software 00:05:52.790 Queue depth: 32 00:05:52.790 Allocate depth: 32 00:05:52.790 # threads/core: 1 00:05:52.790 Run time: 1 seconds 00:05:52.790 Verify: Yes 00:05:52.790 00:05:52.790 Running for 1 seconds... 00:05:52.790 00:05:52.790 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:52.790 ------------------------------------------------------------------------------------ 00:05:52.790 0,0 297568/s 1162 MiB/s 0 0 00:05:52.790 ==================================================================================== 00:05:52.790 Total 297568/s 1162 MiB/s 0 0' 00:05:52.790 06:43:06 -- accel/accel.sh@20 -- # IFS=: 00:05:52.790 06:43:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:52.790 06:43:06 -- accel/accel.sh@20 -- # read -r var val 00:05:52.790 06:43:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:52.790 06:43:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.790 06:43:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.790 06:43:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.790 06:43:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.790 06:43:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.790 06:43:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.790 06:43:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.790 06:43:06 -- accel/accel.sh@42 -- # jq -r . 00:05:52.790 [2024-05-15 06:43:06.892199] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:52.790 [2024-05-15 06:43:06.892281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385566 ] 00:05:52.790 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.790 [2024-05-15 06:43:06.964085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.049 [2024-05-15 06:43:07.087209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val= 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val= 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val=0x1 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val= 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val= 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val=dualcast 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val= 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val=software 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@23 -- # accel_module=software 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val=32 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val=32 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val=1 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val=Yes 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val= 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:53.049 06:43:07 -- accel/accel.sh@21 -- # val= 00:05:53.049 06:43:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # IFS=: 00:05:53.049 06:43:07 -- accel/accel.sh@20 -- # read -r var val 00:05:54.423 06:43:08 -- accel/accel.sh@21 -- # val= 00:05:54.423 06:43:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # IFS=: 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # read -r var val 00:05:54.423 06:43:08 -- accel/accel.sh@21 -- # val= 00:05:54.423 06:43:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # IFS=: 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # read -r var val 00:05:54.423 06:43:08 -- accel/accel.sh@21 -- # val= 00:05:54.423 06:43:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # IFS=: 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # read -r var val 00:05:54.423 06:43:08 -- accel/accel.sh@21 -- # val= 00:05:54.423 06:43:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # IFS=: 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # read -r var val 00:05:54.423 06:43:08 -- accel/accel.sh@21 -- # val= 00:05:54.423 06:43:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # IFS=: 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # read -r var val 00:05:54.423 06:43:08 -- accel/accel.sh@21 -- # val= 00:05:54.423 06:43:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # IFS=: 00:05:54.423 06:43:08 -- accel/accel.sh@20 -- # read -r var val 00:05:54.423 06:43:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:54.423 06:43:08 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:54.423 06:43:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.423 00:05:54.423 real 0m2.985s 00:05:54.423 user 0m2.678s 00:05:54.423 sys 0m0.298s 00:05:54.423 06:43:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.423 06:43:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.423 ************************************ 00:05:54.423 END TEST accel_dualcast 00:05:54.423 ************************************ 00:05:54.423 06:43:08 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:54.423 06:43:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:54.423 06:43:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.423 06:43:08 -- common/autotest_common.sh@10 -- # set +x 00:05:54.423 ************************************ 00:05:54.423 START TEST accel_compare 00:05:54.423 ************************************ 00:05:54.423 06:43:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:54.423 06:43:08 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.423 06:43:08 -- 
accel/accel.sh@17 -- # local accel_module 00:05:54.423 06:43:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:54.423 06:43:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:54.423 06:43:08 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.423 06:43:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.423 06:43:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.423 06:43:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.423 06:43:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.423 06:43:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.423 06:43:08 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.423 06:43:08 -- accel/accel.sh@42 -- # jq -r . 00:05:54.423 [2024-05-15 06:43:08.409028] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:54.423 [2024-05-15 06:43:08.409110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385844 ] 00:05:54.423 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.423 [2024-05-15 06:43:08.487566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.423 [2024-05-15 06:43:08.607917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.796 06:43:09 -- accel/accel.sh@18 -- # out=' 00:05:55.796 SPDK Configuration: 00:05:55.796 Core mask: 0x1 00:05:55.796 00:05:55.796 Accel Perf Configuration: 00:05:55.796 Workload Type: compare 00:05:55.796 Transfer size: 4096 bytes 00:05:55.796 Vector count 1 00:05:55.796 Module: software 00:05:55.796 Queue depth: 32 00:05:55.796 Allocate depth: 32 00:05:55.796 # threads/core: 1 00:05:55.796 Run time: 1 seconds 00:05:55.796 Verify: Yes 00:05:55.796 00:05:55.796 Running for 1 seconds... 00:05:55.796 00:05:55.796 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:55.796 ------------------------------------------------------------------------------------ 00:05:55.796 0,0 398432/s 1556 MiB/s 0 0 00:05:55.796 ==================================================================================== 00:05:55.796 Total 398432/s 1556 MiB/s 0 0' 00:05:55.796 06:43:09 -- accel/accel.sh@20 -- # IFS=: 00:05:55.796 06:43:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:55.796 06:43:09 -- accel/accel.sh@20 -- # read -r var val 00:05:55.796 06:43:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:55.796 06:43:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.796 06:43:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.796 06:43:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.796 06:43:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.796 06:43:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.796 06:43:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.796 06:43:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.796 06:43:09 -- accel/accel.sh@42 -- # jq -r . 00:05:55.796 [2024-05-15 06:43:09.898432] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:55.796 [2024-05-15 06:43:09.898514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid385993 ] 00:05:55.796 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.796 [2024-05-15 06:43:09.973120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.055 [2024-05-15 06:43:10.099255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val= 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val= 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val=0x1 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val= 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val= 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val=compare 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val= 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val=software 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val=32 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val=32 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val=1 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val=Yes 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val= 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:56.055 06:43:10 -- accel/accel.sh@21 -- # val= 00:05:56.055 06:43:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # IFS=: 00:05:56.055 06:43:10 -- accel/accel.sh@20 -- # read -r var val 00:05:57.428 06:43:11 -- accel/accel.sh@21 -- # val= 00:05:57.428 06:43:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # IFS=: 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # read -r var val 00:05:57.428 06:43:11 -- accel/accel.sh@21 -- # val= 00:05:57.428 06:43:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # IFS=: 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # read -r var val 00:05:57.428 06:43:11 -- accel/accel.sh@21 -- # val= 00:05:57.428 06:43:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # IFS=: 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # read -r var val 00:05:57.428 06:43:11 -- accel/accel.sh@21 -- # val= 00:05:57.428 06:43:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # IFS=: 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # read -r var val 00:05:57.428 06:43:11 -- accel/accel.sh@21 -- # val= 00:05:57.428 06:43:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # IFS=: 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # read -r var val 00:05:57.428 06:43:11 -- accel/accel.sh@21 -- # val= 00:05:57.428 06:43:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # IFS=: 00:05:57.428 06:43:11 -- accel/accel.sh@20 -- # read -r var val 00:05:57.428 06:43:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:57.428 06:43:11 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:57.428 06:43:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.428 00:05:57.428 real 0m2.993s 00:05:57.428 user 0m2.685s 00:05:57.428 sys 0m0.299s 00:05:57.428 06:43:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.428 06:43:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.428 ************************************ 00:05:57.428 END TEST accel_compare 00:05:57.428 ************************************ 00:05:57.428 06:43:11 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:57.428 06:43:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:57.428 06:43:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.428 06:43:11 -- common/autotest_common.sh@10 -- # set +x 00:05:57.428 ************************************ 00:05:57.428 START TEST accel_xor 00:05:57.428 ************************************ 00:05:57.428 06:43:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:57.428 06:43:11 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.428 06:43:11 -- accel/accel.sh@17 
-- # local accel_module 00:05:57.428 06:43:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:57.428 06:43:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:57.428 06:43:11 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.429 06:43:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.429 06:43:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.429 06:43:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.429 06:43:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.429 06:43:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.429 06:43:11 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.429 06:43:11 -- accel/accel.sh@42 -- # jq -r . 00:05:57.429 [2024-05-15 06:43:11.428955] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:57.429 [2024-05-15 06:43:11.429037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386153 ] 00:05:57.429 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.429 [2024-05-15 06:43:11.501958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.429 [2024-05-15 06:43:11.625507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.797 06:43:12 -- accel/accel.sh@18 -- # out=' 00:05:58.797 SPDK Configuration: 00:05:58.797 Core mask: 0x1 00:05:58.797 00:05:58.797 Accel Perf Configuration: 00:05:58.797 Workload Type: xor 00:05:58.797 Source buffers: 2 00:05:58.797 Transfer size: 4096 bytes 00:05:58.797 Vector count 1 00:05:58.797 Module: software 00:05:58.797 Queue depth: 32 00:05:58.797 Allocate depth: 32 00:05:58.797 # threads/core: 1 00:05:58.797 Run time: 1 seconds 00:05:58.797 Verify: Yes 00:05:58.797 00:05:58.797 Running for 1 seconds... 00:05:58.797 00:05:58.797 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:58.797 ------------------------------------------------------------------------------------ 00:05:58.797 0,0 193024/s 754 MiB/s 0 0 00:05:58.797 ==================================================================================== 00:05:58.797 Total 193024/s 754 MiB/s 0 0' 00:05:58.797 06:43:12 -- accel/accel.sh@20 -- # IFS=: 00:05:58.797 06:43:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:58.797 06:43:12 -- accel/accel.sh@20 -- # read -r var val 00:05:58.797 06:43:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:58.797 06:43:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.797 06:43:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.797 06:43:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.797 06:43:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.797 06:43:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.797 06:43:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.797 06:43:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.797 06:43:12 -- accel/accel.sh@42 -- # jq -r . 00:05:58.797 [2024-05-15 06:43:12.920161] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:58.798 [2024-05-15 06:43:12.920243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386412 ] 00:05:58.798 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.798 [2024-05-15 06:43:12.997078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.063 [2024-05-15 06:43:13.117946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.063 06:43:13 -- accel/accel.sh@21 -- # val= 00:05:59.063 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 06:43:13 -- accel/accel.sh@21 -- # val= 00:05:59.063 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 06:43:13 -- accel/accel.sh@21 -- # val=0x1 00:05:59.063 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.063 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.063 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.063 06:43:13 -- accel/accel.sh@21 -- # val= 00:05:59.063 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val= 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val=xor 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val=2 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val= 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val=software 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val=32 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val=32 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- 
accel/accel.sh@21 -- # val=1 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val=Yes 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val= 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:05:59.064 06:43:13 -- accel/accel.sh@21 -- # val= 00:05:59.064 06:43:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # IFS=: 00:05:59.064 06:43:13 -- accel/accel.sh@20 -- # read -r var val 00:06:00.438 06:43:14 -- accel/accel.sh@21 -- # val= 00:06:00.438 06:43:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.438 06:43:14 -- accel/accel.sh@21 -- # val= 00:06:00.438 06:43:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.438 06:43:14 -- accel/accel.sh@21 -- # val= 00:06:00.438 06:43:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.438 06:43:14 -- accel/accel.sh@21 -- # val= 00:06:00.438 06:43:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.438 06:43:14 -- accel/accel.sh@21 -- # val= 00:06:00.438 06:43:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.438 06:43:14 -- accel/accel.sh@21 -- # val= 00:06:00.438 06:43:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # IFS=: 00:06:00.438 06:43:14 -- accel/accel.sh@20 -- # read -r var val 00:06:00.438 06:43:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.438 06:43:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:00.438 06:43:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.438 00:06:00.438 real 0m2.983s 00:06:00.438 user 0m2.666s 00:06:00.438 sys 0m0.308s 00:06:00.438 06:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.438 06:43:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.438 ************************************ 00:06:00.438 END TEST accel_xor 00:06:00.438 ************************************ 00:06:00.438 06:43:14 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:00.438 06:43:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:00.438 06:43:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.438 06:43:14 -- common/autotest_common.sh@10 -- # set +x 00:06:00.438 ************************************ 00:06:00.438 START TEST accel_xor 
00:06:00.438 ************************************ 00:06:00.438 06:43:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:00.438 06:43:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.438 06:43:14 -- accel/accel.sh@17 -- # local accel_module 00:06:00.438 06:43:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:00.438 06:43:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:00.438 06:43:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.438 06:43:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.438 06:43:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.438 06:43:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.438 06:43:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.438 06:43:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.438 06:43:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.438 06:43:14 -- accel/accel.sh@42 -- # jq -r . 00:06:00.438 [2024-05-15 06:43:14.440617] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:00.438 [2024-05-15 06:43:14.440695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386580 ] 00:06:00.438 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.438 [2024-05-15 06:43:14.512870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.438 [2024-05-15 06:43:14.632439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.811 06:43:15 -- accel/accel.sh@18 -- # out=' 00:06:01.811 SPDK Configuration: 00:06:01.811 Core mask: 0x1 00:06:01.811 00:06:01.811 Accel Perf Configuration: 00:06:01.811 Workload Type: xor 00:06:01.811 Source buffers: 3 00:06:01.811 Transfer size: 4096 bytes 00:06:01.811 Vector count 1 00:06:01.811 Module: software 00:06:01.811 Queue depth: 32 00:06:01.811 Allocate depth: 32 00:06:01.811 # threads/core: 1 00:06:01.811 Run time: 1 seconds 00:06:01.811 Verify: Yes 00:06:01.811 00:06:01.811 Running for 1 seconds... 00:06:01.811 00:06:01.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.811 ------------------------------------------------------------------------------------ 00:06:01.811 0,0 183808/s 718 MiB/s 0 0 00:06:01.811 ==================================================================================== 00:06:01.812 Total 183808/s 718 MiB/s 0 0' 00:06:01.812 06:43:15 -- accel/accel.sh@20 -- # IFS=: 00:06:01.812 06:43:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:01.812 06:43:15 -- accel/accel.sh@20 -- # read -r var val 00:06:01.812 06:43:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:01.812 06:43:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.812 06:43:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.812 06:43:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.812 06:43:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.812 06:43:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.812 06:43:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.812 06:43:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.812 06:43:15 -- accel/accel.sh@42 -- # jq -r . 00:06:01.812 [2024-05-15 06:43:15.936019] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
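The second accel_xor variant only widens the operation: run_test re-invokes accel_test with -x 3, so accel_perf xors three 4 KiB source buffers per operation ("Source buffers: 3" in the dump), and throughput drops as expected, 183808 ops/s here against 193024 ops/s with two buffers. A sketch under the same assumptions as before:

$SPDK_DIR/build/examples/accel_perf -t 1 -w xor -y -x 3   # -x sets the xor source-buffer count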
00:06:01.812 [2024-05-15 06:43:15.936102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386721 ] 00:06:01.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.812 [2024-05-15 06:43:16.009154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.070 [2024-05-15 06:43:16.130083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val= 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val= 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val=0x1 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val= 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val= 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val=xor 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val=3 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val= 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val=software 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val=32 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val=32 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- 
accel/accel.sh@21 -- # val=1 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val=Yes 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val= 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:02.070 06:43:16 -- accel/accel.sh@21 -- # val= 00:06:02.070 06:43:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # IFS=: 00:06:02.070 06:43:16 -- accel/accel.sh@20 -- # read -r var val 00:06:03.445 06:43:17 -- accel/accel.sh@21 -- # val= 00:06:03.445 06:43:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.445 06:43:17 -- accel/accel.sh@21 -- # val= 00:06:03.445 06:43:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.445 06:43:17 -- accel/accel.sh@21 -- # val= 00:06:03.445 06:43:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.445 06:43:17 -- accel/accel.sh@21 -- # val= 00:06:03.445 06:43:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.445 06:43:17 -- accel/accel.sh@21 -- # val= 00:06:03.445 06:43:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.445 06:43:17 -- accel/accel.sh@21 -- # val= 00:06:03.445 06:43:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # IFS=: 00:06:03.445 06:43:17 -- accel/accel.sh@20 -- # read -r var val 00:06:03.445 06:43:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.445 06:43:17 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:03.445 06:43:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.445 00:06:03.445 real 0m2.987s 00:06:03.445 user 0m2.678s 00:06:03.445 sys 0m0.301s 00:06:03.445 06:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.445 06:43:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.445 ************************************ 00:06:03.445 END TEST accel_xor 00:06:03.445 ************************************ 00:06:03.445 06:43:17 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:03.445 06:43:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:03.445 06:43:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.445 06:43:17 -- common/autotest_common.sh@10 -- # set +x 00:06:03.445 ************************************ 00:06:03.445 START TEST 
accel_dif_verify 00:06:03.445 ************************************ 00:06:03.445 06:43:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:03.445 06:43:17 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.445 06:43:17 -- accel/accel.sh@17 -- # local accel_module 00:06:03.445 06:43:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:03.445 06:43:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:03.445 06:43:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.445 06:43:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.445 06:43:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.445 06:43:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.445 06:43:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.445 06:43:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.445 06:43:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.445 06:43:17 -- accel/accel.sh@42 -- # jq -r . 00:06:03.445 [2024-05-15 06:43:17.450873] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:03.445 [2024-05-15 06:43:17.450961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid386999 ] 00:06:03.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.445 [2024-05-15 06:43:17.526531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.445 [2024-05-15 06:43:17.646769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.819 06:43:18 -- accel/accel.sh@18 -- # out=' 00:06:04.819 SPDK Configuration: 00:06:04.819 Core mask: 0x1 00:06:04.819 00:06:04.819 Accel Perf Configuration: 00:06:04.819 Workload Type: dif_verify 00:06:04.819 Vector size: 4096 bytes 00:06:04.819 Transfer size: 4096 bytes 00:06:04.819 Block size: 512 bytes 00:06:04.819 Metadata size: 8 bytes 00:06:04.819 Vector count 1 00:06:04.819 Module: software 00:06:04.819 Queue depth: 32 00:06:04.819 Allocate depth: 32 00:06:04.819 # threads/core: 1 00:06:04.819 Run time: 1 seconds 00:06:04.819 Verify: No 00:06:04.819 00:06:04.819 Running for 1 seconds... 00:06:04.819 00:06:04.819 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.819 ------------------------------------------------------------------------------------ 00:06:04.819 0,0 81984/s 325 MiB/s 0 0 00:06:04.819 ==================================================================================== 00:06:04.819 Total 81984/s 320 MiB/s 0 0' 00:06:04.819 06:43:18 -- accel/accel.sh@20 -- # IFS=: 00:06:04.819 06:43:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:04.819 06:43:18 -- accel/accel.sh@20 -- # read -r var val 00:06:04.819 06:43:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:04.819 06:43:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.819 06:43:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.819 06:43:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.819 06:43:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.819 06:43:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.819 06:43:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.819 06:43:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.819 06:43:18 -- accel/accel.sh@42 -- # jq -r . 
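accel_dif_verify switches the workload to T10 DIF checking: each 4 KiB transfer is treated as eight 512-byte blocks, each carrying 8 bytes of protection metadata ("Block size: 512 bytes", "Metadata size: 8 bytes" above), and accel_perf verifies the tags on every block. Note "Verify: No" in the dump: the workload itself is the check, so the harness omits -y. Same-assumption sketch:

$SPDK_DIR/build/examples/accel_perf -t 1 -w dif_verify   # check 8 B of DIF per 512 B block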
00:06:04.819 [2024-05-15 06:43:18.941681] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:04.819 [2024-05-15 06:43:18.941763] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387143 ] 00:06:04.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.819 [2024-05-15 06:43:19.011383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.077 [2024-05-15 06:43:19.131602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val= 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val= 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val=0x1 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val= 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val= 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val=dif_verify 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.077 06:43:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:05.077 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.077 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val= 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val=software 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val=32 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val=32 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val=1 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val=No 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val= 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:05.078 06:43:19 -- accel/accel.sh@21 -- # val= 00:06:05.078 06:43:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # IFS=: 00:06:05.078 06:43:19 -- accel/accel.sh@20 -- # read -r var val 00:06:06.449 06:43:20 -- accel/accel.sh@21 -- # val= 00:06:06.449 06:43:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.449 06:43:20 -- accel/accel.sh@21 -- # val= 00:06:06.449 06:43:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.449 06:43:20 -- accel/accel.sh@21 -- # val= 00:06:06.449 06:43:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.449 06:43:20 -- accel/accel.sh@21 -- # val= 00:06:06.449 06:43:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.449 06:43:20 -- accel/accel.sh@21 -- # val= 00:06:06.449 06:43:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.449 06:43:20 -- accel/accel.sh@21 -- # val= 00:06:06.449 06:43:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # IFS=: 00:06:06.449 06:43:20 -- accel/accel.sh@20 -- # read -r var val 00:06:06.449 06:43:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.449 06:43:20 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:06.449 06:43:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.449 00:06:06.449 real 0m2.975s 00:06:06.449 user 0m2.665s 00:06:06.449 sys 0m0.304s 00:06:06.449 06:43:20 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.449 06:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.449 ************************************ 00:06:06.449 END TEST accel_dif_verify 00:06:06.449 ************************************ 00:06:06.449 06:43:20 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:06.449 06:43:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:06.449 06:43:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.449 06:43:20 -- common/autotest_common.sh@10 -- # set +x 00:06:06.449 ************************************ 00:06:06.449 START TEST accel_dif_generate 00:06:06.449 ************************************ 00:06:06.449 06:43:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:06.449 06:43:20 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.449 06:43:20 -- accel/accel.sh@17 -- # local accel_module 00:06:06.449 06:43:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:06.449 06:43:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:06.449 06:43:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.449 06:43:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.449 06:43:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.449 06:43:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.449 06:43:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.449 06:43:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.449 06:43:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.449 06:43:20 -- accel/accel.sh@42 -- # jq -r . 00:06:06.449 [2024-05-15 06:43:20.456040] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:06.450 [2024-05-15 06:43:20.456141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387308 ] 00:06:06.450 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.450 [2024-05-15 06:43:20.531778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.450 [2024-05-15 06:43:20.650837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.848 06:43:21 -- accel/accel.sh@18 -- # out=' 00:06:07.848 SPDK Configuration: 00:06:07.849 Core mask: 0x1 00:06:07.849 00:06:07.849 Accel Perf Configuration: 00:06:07.849 Workload Type: dif_generate 00:06:07.849 Vector size: 4096 bytes 00:06:07.849 Transfer size: 4096 bytes 00:06:07.849 Block size: 512 bytes 00:06:07.849 Metadata size: 8 bytes 00:06:07.849 Vector count 1 00:06:07.849 Module: software 00:06:07.849 Queue depth: 32 00:06:07.849 Allocate depth: 32 00:06:07.849 # threads/core: 1 00:06:07.849 Run time: 1 seconds 00:06:07.849 Verify: No 00:06:07.849 00:06:07.849 Running for 1 seconds... 
00:06:07.849 00:06:07.849 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.849 ------------------------------------------------------------------------------------ 00:06:07.849 0,0 96288/s 382 MiB/s 0 0 00:06:07.849 ==================================================================================== 00:06:07.849 Total 96288/s 376 MiB/s 0 0' 00:06:07.849 06:43:21 -- accel/accel.sh@20 -- # IFS=: 00:06:07.849 06:43:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:07.849 06:43:21 -- accel/accel.sh@20 -- # read -r var val 00:06:07.849 06:43:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:07.849 06:43:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.849 06:43:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.849 06:43:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.849 06:43:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.849 06:43:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.849 06:43:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.849 06:43:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.849 06:43:21 -- accel/accel.sh@42 -- # jq -r . 00:06:07.849 [2024-05-15 06:43:21.944837] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:07.849 [2024-05-15 06:43:21.944918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387587 ] 00:06:07.849 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.849 [2024-05-15 06:43:22.017798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.107 [2024-05-15 06:43:22.138212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val= 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val= 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val=0x1 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val= 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val= 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val=dif_generate 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 
00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val= 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val=software 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val=32 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val=32 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val=1 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val=No 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val= 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:08.107 06:43:22 -- accel/accel.sh@21 -- # val= 00:06:08.107 06:43:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # IFS=: 00:06:08.107 06:43:22 -- accel/accel.sh@20 -- # read -r var val 00:06:09.481 06:43:23 -- accel/accel.sh@21 -- # val= 00:06:09.481 06:43:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.481 06:43:23 -- accel/accel.sh@21 -- # val= 00:06:09.481 06:43:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.481 06:43:23 -- accel/accel.sh@21 -- # val= 00:06:09.481 06:43:23 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.481 06:43:23 -- accel/accel.sh@21 -- # val= 00:06:09.481 06:43:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.481 06:43:23 -- accel/accel.sh@21 -- # val= 00:06:09.481 06:43:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.481 06:43:23 -- accel/accel.sh@21 -- # val= 00:06:09.481 06:43:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # IFS=: 00:06:09.481 06:43:23 -- accel/accel.sh@20 -- # read -r var val 00:06:09.481 06:43:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.481 06:43:23 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:09.481 06:43:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.481 00:06:09.481 real 0m2.987s 00:06:09.481 user 0m2.668s 00:06:09.481 sys 0m0.313s 00:06:09.481 06:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.481 06:43:23 -- common/autotest_common.sh@10 -- # set +x 00:06:09.481 ************************************ 00:06:09.481 END TEST accel_dif_generate 00:06:09.481 ************************************ 00:06:09.481 06:43:23 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:09.481 06:43:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:09.481 06:43:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.481 06:43:23 -- common/autotest_common.sh@10 -- # set +x 00:06:09.481 ************************************ 00:06:09.481 START TEST accel_dif_generate_copy 00:06:09.481 ************************************ 00:06:09.481 06:43:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:09.481 06:43:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.481 06:43:23 -- accel/accel.sh@17 -- # local accel_module 00:06:09.481 06:43:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:09.481 06:43:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:09.481 06:43:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.481 06:43:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.481 06:43:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.481 06:43:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.481 06:43:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.481 06:43:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.481 06:43:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.481 06:43:23 -- accel/accel.sh@42 -- # jq -r . 00:06:09.481 [2024-05-15 06:43:23.465857] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
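The accel_dif_generate pass that just ended is the write-side twin of dif_verify: it computes and inserts the per-block protection information rather than checking existing tags, which is why it posts the higher rate on this node (96288 ops/s against 81984 ops/s for dif_verify). Sketch, same assumptions:

$SPDK_DIR/build/examples/accel_perf -t 1 -w dif_generate   # generate DIF tags per 512 B block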
00:06:09.481 [2024-05-15 06:43:23.465947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387748 ] 00:06:09.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.481 [2024-05-15 06:43:23.533081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.481 [2024-05-15 06:43:23.651244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.855 06:43:24 -- accel/accel.sh@18 -- # out=' 00:06:10.855 SPDK Configuration: 00:06:10.855 Core mask: 0x1 00:06:10.855 00:06:10.855 Accel Perf Configuration: 00:06:10.855 Workload Type: dif_generate_copy 00:06:10.855 Vector size: 4096 bytes 00:06:10.855 Transfer size: 4096 bytes 00:06:10.855 Vector count 1 00:06:10.855 Module: software 00:06:10.855 Queue depth: 32 00:06:10.855 Allocate depth: 32 00:06:10.855 # threads/core: 1 00:06:10.855 Run time: 1 seconds 00:06:10.855 Verify: No 00:06:10.855 00:06:10.855 Running for 1 seconds... 00:06:10.855 00:06:10.855 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.855 ------------------------------------------------------------------------------------ 00:06:10.855 0,0 76256/s 302 MiB/s 0 0 00:06:10.855 ==================================================================================== 00:06:10.855 Total 76256/s 297 MiB/s 0 0' 00:06:10.855 06:43:24 -- accel/accel.sh@20 -- # IFS=: 00:06:10.855 06:43:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:10.855 06:43:24 -- accel/accel.sh@20 -- # read -r var val 00:06:10.855 06:43:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:10.855 06:43:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.855 06:43:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.855 06:43:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.855 06:43:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.855 06:43:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.855 06:43:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.855 06:43:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.855 06:43:24 -- accel/accel.sh@42 -- # jq -r . 00:06:10.855 [2024-05-15 06:43:24.934313] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
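dif_generate_copy, whose first run is dumped above, fuses tag generation with a copy into a separate destination buffer, so each operation moves data as well as computing DIF; the added movement plausibly accounts for the lower rate (76256 ops/s against 96288 ops/s for in-place dif_generate). Sketch, same assumptions:

$SPDK_DIR/build/examples/accel_perf -t 1 -w dif_generate_copy   # generate DIF while copying to a destination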
00:06:10.855 [2024-05-15 06:43:24.934394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid387896 ] 00:06:10.856 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.856 [2024-05-15 06:43:25.007191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.114 [2024-05-15 06:43:25.128181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val= 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val= 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val=0x1 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val= 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val= 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val= 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val=software 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val=32 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val=32 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var 
val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val=1 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val=No 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val= 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:11.114 06:43:25 -- accel/accel.sh@21 -- # val= 00:06:11.114 06:43:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # IFS=: 00:06:11.114 06:43:25 -- accel/accel.sh@20 -- # read -r var val 00:06:12.489 06:43:26 -- accel/accel.sh@21 -- # val= 00:06:12.489 06:43:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.489 06:43:26 -- accel/accel.sh@21 -- # val= 00:06:12.489 06:43:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.489 06:43:26 -- accel/accel.sh@21 -- # val= 00:06:12.489 06:43:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.489 06:43:26 -- accel/accel.sh@21 -- # val= 00:06:12.489 06:43:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.489 06:43:26 -- accel/accel.sh@21 -- # val= 00:06:12.489 06:43:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.489 06:43:26 -- accel/accel.sh@21 -- # val= 00:06:12.489 06:43:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # IFS=: 00:06:12.489 06:43:26 -- accel/accel.sh@20 -- # read -r var val 00:06:12.489 06:43:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.489 06:43:26 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:12.489 06:43:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.489 00:06:12.489 real 0m2.962s 00:06:12.489 user 0m2.660s 00:06:12.489 sys 0m0.294s 00:06:12.489 06:43:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.489 06:43:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.489 ************************************ 00:06:12.489 END TEST accel_dif_generate_copy 00:06:12.489 ************************************ 00:06:12.489 06:43:26 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:12.489 06:43:26 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.489 06:43:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:12.489 06:43:26 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.489 06:43:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.489 ************************************ 00:06:12.489 START TEST accel_comp 00:06:12.489 ************************************ 00:06:12.489 06:43:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.489 06:43:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.489 06:43:26 -- accel/accel.sh@17 -- # local accel_module 00:06:12.489 06:43:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.489 06:43:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:12.489 06:43:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.489 06:43:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.489 06:43:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.489 06:43:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.489 06:43:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.489 06:43:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.489 06:43:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.489 06:43:26 -- accel/accel.sh@42 -- # jq -r . 00:06:12.489 [2024-05-15 06:43:26.454287] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:12.489 [2024-05-15 06:43:26.454380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388174 ] 00:06:12.489 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.489 [2024-05-15 06:43:26.529377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.489 [2024-05-15 06:43:26.649611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.864 06:43:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:13.864 00:06:13.864 SPDK Configuration: 00:06:13.864 Core mask: 0x1 00:06:13.864 00:06:13.864 Accel Perf Configuration: 00:06:13.864 Workload Type: compress 00:06:13.864 Transfer size: 4096 bytes 00:06:13.864 Vector count 1 00:06:13.864 Module: software 00:06:13.864 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:13.864 Queue depth: 32 00:06:13.864 Allocate depth: 32 00:06:13.864 # threads/core: 1 00:06:13.864 Run time: 1 seconds 00:06:13.864 Verify: No 00:06:13.864 00:06:13.864 Running for 1 seconds... 
00:06:13.864 00:06:13.864 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.864 ------------------------------------------------------------------------------------ 00:06:13.864 0,0 32320/s 134 MiB/s 0 0 00:06:13.864 ==================================================================================== 00:06:13.864 Total 32320/s 126 MiB/s 0 0' 00:06:13.864 06:43:27 -- accel/accel.sh@20 -- # IFS=: 00:06:13.864 06:43:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:13.864 06:43:27 -- accel/accel.sh@20 -- # read -r var val 00:06:13.864 06:43:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:13.864 06:43:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.864 06:43:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.864 06:43:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.864 06:43:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.864 06:43:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.864 06:43:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.864 06:43:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.864 06:43:27 -- accel/accel.sh@42 -- # jq -r . 00:06:13.864 [2024-05-15 06:43:27.957634] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:13.864 [2024-05-15 06:43:27.957715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388314 ] 00:06:13.864 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.864 [2024-05-15 06:43:28.031174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.123 [2024-05-15 06:43:28.150756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=0x1 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=compress 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 
06:43:28 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=software 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=32 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=32 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=1 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val=No 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:14.123 06:43:28 -- accel/accel.sh@21 -- # val= 00:06:14.123 06:43:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # IFS=: 00:06:14.123 06:43:28 -- accel/accel.sh@20 -- # read -r var val 00:06:15.497 06:43:29 -- accel/accel.sh@21 -- # val= 00:06:15.497 06:43:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.497 06:43:29 -- accel/accel.sh@21 -- # val= 00:06:15.497 06:43:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.497 06:43:29 -- accel/accel.sh@21 -- # val= 00:06:15.497 06:43:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # 
IFS=: 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.497 06:43:29 -- accel/accel.sh@21 -- # val= 00:06:15.497 06:43:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.497 06:43:29 -- accel/accel.sh@21 -- # val= 00:06:15.497 06:43:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.497 06:43:29 -- accel/accel.sh@21 -- # val= 00:06:15.497 06:43:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # IFS=: 00:06:15.497 06:43:29 -- accel/accel.sh@20 -- # read -r var val 00:06:15.497 06:43:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.497 06:43:29 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:15.497 06:43:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.497 00:06:15.497 real 0m2.994s 00:06:15.497 user 0m2.678s 00:06:15.497 sys 0m0.308s 00:06:15.497 06:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.497 06:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:15.497 ************************************ 00:06:15.497 END TEST accel_comp 00:06:15.497 ************************************ 00:06:15.497 06:43:29 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.497 06:43:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:15.497 06:43:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.497 06:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:15.497 ************************************ 00:06:15.497 START TEST accel_decomp 00:06:15.497 ************************************ 00:06:15.497 06:43:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.497 06:43:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.497 06:43:29 -- accel/accel.sh@17 -- # local accel_module 00:06:15.497 06:43:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.497 06:43:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:15.497 06:43:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.497 06:43:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.497 06:43:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.497 06:43:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.497 06:43:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.497 06:43:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.497 06:43:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.497 06:43:29 -- accel/accel.sh@42 -- # jq -r . 00:06:15.497 [2024-05-15 06:43:29.472012] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
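The accel_comp pass above exercises the software compress path through run_test/accel_test; the exact binary and flags are visible in the xtrace records. A minimal sketch for rerunning that measurement outside the harness, using only flags that appear verbatim in the trace (the harness also feeds a JSON accel config over -c /dev/fd/62 via build_accel_config; omitting it and relying on the default software module is an assumption of this sketch):

    # Standalone software-compress run; flags copied from the xtrace above.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    # -t 1: run for 1 second, -w compress: workload type, -l: input data file
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"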
00:06:15.497 [2024-05-15 06:43:29.472090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388482 ] 00:06:15.497 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.497 [2024-05-15 06:43:29.551894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.497 [2024-05-15 06:43:29.672682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.879 06:43:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:16.879 00:06:16.879 SPDK Configuration: 00:06:16.879 Core mask: 0x1 00:06:16.879 00:06:16.879 Accel Perf Configuration: 00:06:16.879 Workload Type: decompress 00:06:16.879 Transfer size: 4096 bytes 00:06:16.879 Vector count 1 00:06:16.879 Module: software 00:06:16.879 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:16.879 Queue depth: 32 00:06:16.879 Allocate depth: 32 00:06:16.879 # threads/core: 1 00:06:16.879 Run time: 1 seconds 00:06:16.879 Verify: Yes 00:06:16.879 00:06:16.879 Running for 1 seconds... 00:06:16.879 00:06:16.879 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.879 ------------------------------------------------------------------------------------ 00:06:16.879 0,0 55360/s 216 MiB/s 0 0 00:06:16.879 ==================================================================================== 00:06:16.879 Total 55360/s 216 MiB/s 0 0' 00:06:16.879 06:43:30 -- accel/accel.sh@20 -- # IFS=: 00:06:16.879 06:43:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:16.879 06:43:30 -- accel/accel.sh@20 -- # read -r var val 00:06:16.879 06:43:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:16.879 06:43:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.879 06:43:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.879 06:43:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.879 06:43:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.879 06:43:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.879 06:43:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.879 06:43:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.879 06:43:30 -- accel/accel.sh@42 -- # jq -r . 00:06:16.879 [2024-05-15 06:43:30.971254] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
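Each result table can be cross-checked from the transfer rate alone: the configuration block reports a 4096-byte transfer size, so bandwidth is transfers/s times transfer size. A purely illustrative one-liner for the Total row above:

    # 55360 transfers/s x 4096 B each = 216 MiB/s, matching the Total row.
    awk 'BEGIN { printf "%d MiB/s\n", 55360 * 4096 / 1048576 }'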
00:06:16.879 [2024-05-15 06:43:30.971334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388721 ] 00:06:16.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.879 [2024-05-15 06:43:31.047466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.137 [2024-05-15 06:43:31.168258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=0x1 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=decompress 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=software 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=32 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 
-- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=32 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=1 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val=Yes 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:17.137 06:43:31 -- accel/accel.sh@21 -- # val= 00:06:17.137 06:43:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # IFS=: 00:06:17.137 06:43:31 -- accel/accel.sh@20 -- # read -r var val 00:06:18.511 06:43:32 -- accel/accel.sh@21 -- # val= 00:06:18.511 06:43:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.511 06:43:32 -- accel/accel.sh@21 -- # val= 00:06:18.511 06:43:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.511 06:43:32 -- accel/accel.sh@21 -- # val= 00:06:18.511 06:43:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.511 06:43:32 -- accel/accel.sh@21 -- # val= 00:06:18.511 06:43:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.511 06:43:32 -- accel/accel.sh@21 -- # val= 00:06:18.511 06:43:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.511 06:43:32 -- accel/accel.sh@21 -- # val= 00:06:18.511 06:43:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # IFS=: 00:06:18.511 06:43:32 -- accel/accel.sh@20 -- # read -r var val 00:06:18.511 06:43:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.511 06:43:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:18.511 06:43:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.511 00:06:18.511 real 0m3.003s 00:06:18.511 user 0m2.668s 00:06:18.511 sys 0m0.327s 00:06:18.511 06:43:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.511 06:43:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.511 ************************************ 00:06:18.511 END TEST accel_decomp 00:06:18.511 ************************************ 00:06:18.511 06:43:32 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.511 06:43:32 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:18.511 06:43:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.511 06:43:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.511 ************************************ 00:06:18.511 START TEST accel_decmop_full 00:06:18.511 ************************************ 00:06:18.511 06:43:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.511 06:43:32 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.511 06:43:32 -- accel/accel.sh@17 -- # local accel_module 00:06:18.511 06:43:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.511 06:43:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:18.511 06:43:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.511 06:43:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.511 06:43:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.511 06:43:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.511 06:43:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.511 06:43:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.511 06:43:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.511 06:43:32 -- accel/accel.sh@42 -- # jq -r . 00:06:18.511 [2024-05-15 06:43:32.502084] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:18.511 [2024-05-15 06:43:32.502161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388901 ] 00:06:18.511 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.511 [2024-05-15 06:43:32.580077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.511 [2024-05-15 06:43:32.700488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.883 06:43:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:19.883 00:06:19.883 SPDK Configuration: 00:06:19.883 Core mask: 0x1 00:06:19.883 00:06:19.883 Accel Perf Configuration: 00:06:19.883 Workload Type: decompress 00:06:19.883 Transfer size: 111250 bytes 00:06:19.883 Vector count 1 00:06:19.883 Module: software 00:06:19.883 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:19.883 Queue depth: 32 00:06:19.883 Allocate depth: 32 00:06:19.883 # threads/core: 1 00:06:19.883 Run time: 1 seconds 00:06:19.883 Verify: Yes 00:06:19.883 00:06:19.883 Running for 1 seconds... 
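accel_decmop_full (the spelling comes from the test script itself) reruns the decompress workload with -y -o 0 appended, and the configuration block just printed reports a transfer size of 111250 bytes instead of 4096; reading -o 0 as "use the file's full chunk size" is an inference from this log, not a documented claim. The invocation pattern, with SPDK_DIR as in the earlier sketch:

    # Full-chunk decompress with verification (-y); '-o 0' as passed by the harness.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -o 0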
00:06:19.883 00:06:19.883 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.883 ------------------------------------------------------------------------------------ 00:06:19.883 0,0 3808/s 404 MiB/s 0 0 00:06:19.883 ==================================================================================== 00:06:19.883 Total 3808/s 404 MiB/s 0 0' 00:06:19.883 06:43:33 -- accel/accel.sh@20 -- # IFS=: 00:06:19.883 06:43:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:19.883 06:43:33 -- accel/accel.sh@20 -- # read -r var val 00:06:19.883 06:43:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:19.883 06:43:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.883 06:43:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.883 06:43:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.883 06:43:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.883 06:43:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.883 06:43:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.883 06:43:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.883 06:43:33 -- accel/accel.sh@42 -- # jq -r . 00:06:19.883 [2024-05-15 06:43:34.012709] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
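The same bandwidth arithmetic holds for the larger transfers; a tiny helper (hypothetical name, truncating the way the tables appear to) makes the remaining tables easy to spot-check:

    # bw <transfers per second> <bytes per transfer>
    bw() { awk -v n="$1" -v sz="$2" 'BEGIN { printf "%d MiB/s\n", n * sz / 1048576 }'; }
    bw 3808 111250   # -> 404 MiB/s, matching the table above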
00:06:20.141 06:43:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.141 06:43:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:20.141 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.141 06:43:34 -- accel/accel.sh@21 -- # val= 00:06:20.141 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.141 06:43:34 -- accel/accel.sh@21 -- # val=software 00:06:20.141 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.141 06:43:34 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.141 06:43:34 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:20.141 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.141 06:43:34 -- accel/accel.sh@21 -- # val=32 00:06:20.141 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.141 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.142 06:43:34 -- accel/accel.sh@21 -- # val=32 00:06:20.142 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.142 06:43:34 -- accel/accel.sh@21 -- # val=1 00:06:20.142 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.142 06:43:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.142 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.142 06:43:34 -- accel/accel.sh@21 -- # val=Yes 00:06:20.142 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.142 06:43:34 -- accel/accel.sh@21 -- # val= 00:06:20.142 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:20.142 06:43:34 -- accel/accel.sh@21 -- # val= 00:06:20.142 06:43:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # IFS=: 00:06:20.142 06:43:34 -- accel/accel.sh@20 -- # read -r var val 00:06:21.515 06:43:35 -- accel/accel.sh@21 -- # val= 00:06:21.515 06:43:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.515 06:43:35 -- accel/accel.sh@21 -- # val= 00:06:21.515 06:43:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.515 06:43:35 -- accel/accel.sh@21 -- # val= 00:06:21.515 06:43:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.515 06:43:35 -- 
accel/accel.sh@20 -- # IFS=: 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.515 06:43:35 -- accel/accel.sh@21 -- # val= 00:06:21.515 06:43:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.515 06:43:35 -- accel/accel.sh@21 -- # val= 00:06:21.515 06:43:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.515 06:43:35 -- accel/accel.sh@21 -- # val= 00:06:21.515 06:43:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # IFS=: 00:06:21.515 06:43:35 -- accel/accel.sh@20 -- # read -r var val 00:06:21.515 06:43:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.515 06:43:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:21.515 06:43:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.515 00:06:21.515 real 0m2.996s 00:06:21.515 user 0m2.693s 00:06:21.515 sys 0m0.295s 00:06:21.515 06:43:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.515 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:21.515 ************************************ 00:06:21.515 END TEST accel_decmop_full 00:06:21.515 ************************************ 00:06:21.515 06:43:35 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.515 06:43:35 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:21.515 06:43:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.515 06:43:35 -- common/autotest_common.sh@10 -- # set +x 00:06:21.515 ************************************ 00:06:21.515 START TEST accel_decomp_mcore 00:06:21.515 ************************************ 00:06:21.515 06:43:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.515 06:43:35 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.515 06:43:35 -- accel/accel.sh@17 -- # local accel_module 00:06:21.515 06:43:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.515 06:43:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:21.515 06:43:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.515 06:43:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.515 06:43:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.515 06:43:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.515 06:43:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.515 06:43:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.515 06:43:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.515 06:43:35 -- accel/accel.sh@42 -- # jq -r . 00:06:21.515 [2024-05-15 06:43:35.524659] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:21.515 [2024-05-15 06:43:35.524756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389313 ] 00:06:21.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.515 [2024-05-15 06:43:35.602870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.515 [2024-05-15 06:43:35.726602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.515 [2024-05-15 06:43:35.726659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.515 [2024-05-15 06:43:35.726714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.515 [2024-05-15 06:43:35.726718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.889 06:43:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:22.889 00:06:22.889 SPDK Configuration: 00:06:22.889 Core mask: 0xf 00:06:22.889 00:06:22.889 Accel Perf Configuration: 00:06:22.889 Workload Type: decompress 00:06:22.889 Transfer size: 4096 bytes 00:06:22.889 Vector count 1 00:06:22.889 Module: software 00:06:22.889 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.889 Queue depth: 32 00:06:22.889 Allocate depth: 32 00:06:22.889 # threads/core: 1 00:06:22.889 Run time: 1 seconds 00:06:22.889 Verify: Yes 00:06:22.889 00:06:22.889 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.889 ------------------------------------------------------------------------------------ 00:06:22.889 0,0 50400/s 196 MiB/s 0 0 00:06:22.889 3,0 50720/s 198 MiB/s 0 0 00:06:22.889 2,0 50784/s 198 MiB/s 0 0 00:06:22.889 1,0 50592/s 197 MiB/s 0 0 00:06:22.889 ==================================================================================== 00:06:22.889 Total 202496/s 791 MiB/s 0 0' 00:06:22.889 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:22.889 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:22.889 06:43:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:22.889 06:43:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:22.889 06:43:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.889 06:43:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.889 06:43:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.889 06:43:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.889 06:43:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.889 06:43:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.889 06:43:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.889 06:43:37 -- accel/accel.sh@42 -- # jq -r . 00:06:22.889 [2024-05-15 06:43:37.041296] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
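With four reactors each core sustains a bit less than the standalone run above (roughly 196-198 MiB/s per core versus 216 MiB/s single-core, plausibly contention on shared memory bandwidth), while the aggregate scales nearly linearly; using the bw helper from the earlier sketch:

    bw 202496 4096   # -> 791 MiB/s aggregate, ~3.7x the single-core decompress run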
00:06:22.889 [2024-05-15 06:43:37.041381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389472 ] 00:06:22.889 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.889 [2024-05-15 06:43:37.115213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.149 [2024-05-15 06:43:37.239091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.149 [2024-05-15 06:43:37.239145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.149 [2024-05-15 06:43:37.239196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.149 [2024-05-15 06:43:37.239199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=0xf 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=decompress 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=software 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=32 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=32 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=1 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val=Yes 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:23.149 06:43:37 -- accel/accel.sh@21 -- # val= 00:06:23.149 06:43:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # IFS=: 00:06:23.149 06:43:37 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 
06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@21 -- # val= 00:06:24.560 06:43:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # IFS=: 00:06:24.560 06:43:38 -- accel/accel.sh@20 -- # read -r var val 00:06:24.560 06:43:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.560 06:43:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:24.560 06:43:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.560 00:06:24.560 real 0m3.024s 00:06:24.560 user 0m9.648s 00:06:24.560 sys 0m0.317s 00:06:24.560 06:43:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.560 06:43:38 -- common/autotest_common.sh@10 -- # set +x 00:06:24.560 ************************************ 00:06:24.560 END TEST accel_decomp_mcore 00:06:24.560 ************************************ 00:06:24.560 06:43:38 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.560 06:43:38 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:24.560 06:43:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.560 06:43:38 -- common/autotest_common.sh@10 -- # set +x 00:06:24.560 ************************************ 00:06:24.560 START TEST accel_decomp_full_mcore 00:06:24.560 ************************************ 00:06:24.560 06:43:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.560 06:43:38 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.560 06:43:38 -- accel/accel.sh@17 -- # local accel_module 00:06:24.560 06:43:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.560 06:43:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:24.560 06:43:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.560 06:43:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.560 06:43:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.560 06:43:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.560 06:43:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.560 06:43:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.560 06:43:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.560 06:43:38 -- accel/accel.sh@42 -- # jq -r . 00:06:24.560 [2024-05-15 06:43:38.574462] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
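The timing line for the mcore test (real 0m3.024s against user 0m9.648s) is expected rather than alarming: SPDK reactors poll, so every active core burns CPU for the whole run and user time lands near cores-busy times wall time. A quick illustrative check:

    # Average cores kept busy over the test's wall time.
    awk 'BEGIN { printf "%.1f cores busy on average\n", 9.648 / 3.024 }'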
00:06:24.560 [2024-05-15 06:43:38.574543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389640 ] 00:06:24.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.560 [2024-05-15 06:43:38.649898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:24.560 [2024-05-15 06:43:38.773629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.560 [2024-05-15 06:43:38.773683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.560 [2024-05-15 06:43:38.773734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.560 [2024-05-15 06:43:38.773738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.933 06:43:40 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:25.933 00:06:25.933 SPDK Configuration: 00:06:25.933 Core mask: 0xf 00:06:25.933 00:06:25.933 Accel Perf Configuration: 00:06:25.933 Workload Type: decompress 00:06:25.933 Transfer size: 111250 bytes 00:06:25.933 Vector count 1 00:06:25.933 Module: software 00:06:25.933 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.933 Queue depth: 32 00:06:25.933 Allocate depth: 32 00:06:25.933 # threads/core: 1 00:06:25.933 Run time: 1 seconds 00:06:25.933 Verify: Yes 00:06:25.933 00:06:25.933 Running for 1 seconds... 00:06:25.933 00:06:25.933 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.933 ------------------------------------------------------------------------------------ 00:06:25.933 0,0 3776/s 400 MiB/s 0 0 00:06:25.933 3,0 3776/s 400 MiB/s 0 0 00:06:25.933 2,0 3776/s 400 MiB/s 0 0 00:06:25.933 1,0 3776/s 400 MiB/s 0 0 00:06:25.933 ==================================================================================== 00:06:25.933 Total 15104/s 1602 MiB/s 0 0' 00:06:25.933 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:25.933 06:43:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:25.933 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:25.933 06:43:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:25.933 06:43:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.933 06:43:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.933 06:43:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.933 06:43:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.933 06:43:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.933 06:43:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.933 06:43:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.933 06:43:40 -- accel/accel.sh@42 -- # jq -r . 00:06:25.933 [2024-05-15 06:43:40.083842] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
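The full-buffer mcore table combines both effects, 111250-byte transfers spread over four cores, and the Total row again agrees with the transfer count (bw helper from the earlier sketch):

    bw 15104 111250   # -> 1602 MiB/s, i.e. four cores at ~400 MiB/s each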
00:06:25.933 [2024-05-15 06:43:40.083979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389907 ] 00:06:25.933 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.191 [2024-05-15 06:43:40.170518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.191 [2024-05-15 06:43:40.289993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.191 [2024-05-15 06:43:40.290046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.191 [2024-05-15 06:43:40.290099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.191 [2024-05-15 06:43:40.290102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.191 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.191 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.191 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.191 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.191 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=0xf 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=decompress 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=software 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=32 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=32 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=1 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val=Yes 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:26.192 06:43:40 -- accel/accel.sh@21 -- # val= 00:06:26.192 06:43:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # IFS=: 00:06:26.192 06:43:40 -- accel/accel.sh@20 -- # read -r var val 00:06:27.564 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.564 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.564 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.564 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.564 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.564 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.564 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.564 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.564 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.564 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.564 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.564 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.565 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.565 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.565 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.565 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.565 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.565 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.565 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.565 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.565 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.565 
06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.565 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.565 06:43:41 -- accel/accel.sh@21 -- # val= 00:06:27.565 06:43:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.565 06:43:41 -- accel/accel.sh@20 -- # IFS=: 00:06:27.565 06:43:41 -- accel/accel.sh@20 -- # read -r var val 00:06:27.565 06:43:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.565 06:43:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:27.565 06:43:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.565 00:06:27.565 real 0m3.042s 00:06:27.565 user 0m9.656s 00:06:27.565 sys 0m0.357s 00:06:27.565 06:43:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.565 06:43:41 -- common/autotest_common.sh@10 -- # set +x 00:06:27.565 ************************************ 00:06:27.565 END TEST accel_decomp_full_mcore 00:06:27.565 ************************************ 00:06:27.565 06:43:41 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.565 06:43:41 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:27.565 06:43:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.565 06:43:41 -- common/autotest_common.sh@10 -- # set +x 00:06:27.565 ************************************ 00:06:27.565 START TEST accel_decomp_mthread 00:06:27.565 ************************************ 00:06:27.565 06:43:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.565 06:43:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.565 06:43:41 -- accel/accel.sh@17 -- # local accel_module 00:06:27.565 06:43:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.565 06:43:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:27.565 06:43:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.565 06:43:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.565 06:43:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.565 06:43:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.565 06:43:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.565 06:43:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.565 06:43:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.565 06:43:41 -- accel/accel.sh@42 -- # jq -r . 00:06:27.565 [2024-05-15 06:43:41.642835] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:27.565 [2024-05-15 06:43:41.642918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390070 ] 00:06:27.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.565 [2024-05-15 06:43:41.717012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.823 [2024-05-15 06:43:41.839482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.199 06:43:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
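accel_decomp_mthread keeps the single-core mask but passes -T 2, and the configuration block that follows reports "# threads/core: 2"; the result table therefore carries two Core,Thread rows, 0,0 and 0,1. The invocation, following the pattern above:

    # Two worker threads on one core; SPDK_DIR as in the earlier sketch.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" -y -T 2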
00:06:29.199 00:06:29.199 SPDK Configuration: 00:06:29.199 Core mask: 0x1 00:06:29.199 00:06:29.199 Accel Perf Configuration: 00:06:29.199 Workload Type: decompress 00:06:29.199 Transfer size: 4096 bytes 00:06:29.199 Vector count 1 00:06:29.199 Module: software 00:06:29.199 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.199 Queue depth: 32 00:06:29.199 Allocate depth: 32 00:06:29.199 # threads/core: 2 00:06:29.199 Run time: 1 seconds 00:06:29.199 Verify: Yes 00:06:29.199 00:06:29.199 Running for 1 seconds... 00:06:29.199 00:06:29.199 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.199 ------------------------------------------------------------------------------------ 00:06:29.199 0,1 24896/s 97 MiB/s 0 0 00:06:29.199 0,0 24768/s 96 MiB/s 0 0 00:06:29.199 ==================================================================================== 00:06:29.199 Total 49664/s 194 MiB/s 0 0' 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.199 06:43:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.199 06:43:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.199 06:43:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.199 06:43:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.199 06:43:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.199 06:43:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.199 06:43:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.199 06:43:43 -- accel/accel.sh@42 -- # jq -r . 00:06:29.199 [2024-05-15 06:43:43.143565] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
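The two threads split core 0 almost evenly (24896/s versus 24768/s), and their combined 194 MiB/s sits slightly below the 216 MiB/s single-thread run, consistent with two pollers time-slicing one core:

    bw 49664 4096   # -> 194 MiB/s across threads 0,0 and 0,1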
00:06:29.199 [2024-05-15 06:43:43.143647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390214 ] 00:06:29.199 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.199 [2024-05-15 06:43:43.216048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.199 [2024-05-15 06:43:43.338996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=0x1 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=decompress 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=software 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=32 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 
-- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=32 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=2 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val=Yes 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:29.199 06:43:43 -- accel/accel.sh@21 -- # val= 00:06:29.199 06:43:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # IFS=: 00:06:29.199 06:43:43 -- accel/accel.sh@20 -- # read -r var val 00:06:30.572 06:43:44 -- accel/accel.sh@21 -- # val= 00:06:30.572 06:43:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.572 06:43:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.572 06:43:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.572 06:43:44 -- accel/accel.sh@21 -- # val= 00:06:30.572 06:43:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.572 06:43:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.572 06:43:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.572 06:43:44 -- accel/accel.sh@21 -- # val= 00:06:30.573 06:43:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.573 06:43:44 -- accel/accel.sh@21 -- # val= 00:06:30.573 06:43:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.573 06:43:44 -- accel/accel.sh@21 -- # val= 00:06:30.573 06:43:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.573 06:43:44 -- accel/accel.sh@21 -- # val= 00:06:30.573 06:43:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.573 06:43:44 -- accel/accel.sh@21 -- # val= 00:06:30.573 06:43:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # IFS=: 00:06:30.573 06:43:44 -- accel/accel.sh@20 -- # read -r var val 00:06:30.573 06:43:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.573 06:43:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:30.573 06:43:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.573 00:06:30.573 real 0m3.000s 00:06:30.573 user 0m2.675s 00:06:30.573 sys 0m0.316s 00:06:30.573 06:43:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.573 06:43:44 -- common/autotest_common.sh@10 -- # set +x 
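Note on the run above: the interleaved result rows (0,0 and 0,1) come from accel_perf running two worker threads on core 0 via -T 2. A minimal sketch of the invocation this trace exercises, using only the flags visible in the trace; the empty JSON object fed to /dev/fd/62 is an assumption standing in for whatever build_accel_config actually assembles:

#!/usr/bin/env bash
# Sketch of the accel_decomp_mthread invocation traced above
# (assumes the in-tree workspace layout used by this job).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# accel.sh passes the accel JSON config on /dev/fd/62; an empty object
# is assumed here, which leaves the software module in effect.
exec 62< <(printf '{}')

# -t 1: run one second   -w decompress: workload   -y: verify output
# -T 2: two threads per core (hence rows 0,0 and 0,1 in the table);
# the follow-up test adds -o 0 to switch from 4096-byte blocks to
# full-buffer transfers (111250 bytes in its configuration dump).
"$SPDK/build/examples/accel_perf" -c /dev/fd/62 \
  -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2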
00:06:30.573 ************************************ 00:06:30.573 END TEST accel_decomp_mthread 00:06:30.573 ************************************ 00:06:30.573 06:43:44 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.573 06:43:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:30.573 06:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.573 06:43:44 -- common/autotest_common.sh@10 -- # set +x 00:06:30.573 ************************************ 00:06:30.573 START TEST accel_decomp_full_mthread 00:06:30.573 ************************************ 00:06:30.573 06:43:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.573 06:43:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.573 06:43:44 -- accel/accel.sh@17 -- # local accel_module 00:06:30.573 06:43:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.573 06:43:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:30.573 06:43:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.573 06:43:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.573 06:43:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.573 06:43:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.573 06:43:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.573 06:43:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.573 06:43:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.573 06:43:44 -- accel/accel.sh@42 -- # jq -r . 00:06:30.573 [2024-05-15 06:43:44.669449] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:30.573 [2024-05-15 06:43:44.669530] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390493 ] 00:06:30.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.573 [2024-05-15 06:43:44.743535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.831 [2024-05-15 06:43:44.864397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.205 06:43:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:32.205 00:06:32.205 SPDK Configuration: 00:06:32.205 Core mask: 0x1 00:06:32.205 00:06:32.205 Accel Perf Configuration: 00:06:32.205 Workload Type: decompress 00:06:32.205 Transfer size: 111250 bytes 00:06:32.205 Vector count 1 00:06:32.205 Module: software 00:06:32.205 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.205 Queue depth: 32 00:06:32.205 Allocate depth: 32 00:06:32.205 # threads/core: 2 00:06:32.205 Run time: 1 seconds 00:06:32.205 Verify: Yes 00:06:32.205 00:06:32.205 Running for 1 seconds...
00:06:32.205 00:06:32.205 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.205 ------------------------------------------------------------------------------------ 00:06:32.205 0,1 1952/s 80 MiB/s 0 0 00:06:32.205 0,0 1920/s 79 MiB/s 0 0 00:06:32.205 ==================================================================================== 00:06:32.205 Total 3872/s 410 MiB/s 0 0' 00:06:32.205 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.205 06:43:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.205 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.205 06:43:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:32.205 06:43:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.205 06:43:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.205 06:43:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.205 06:43:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.205 06:43:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.205 06:43:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.205 06:43:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.205 06:43:46 -- accel/accel.sh@42 -- # jq -r . 00:06:32.205 [2024-05-15 06:43:46.196893] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:32.205 [2024-05-15 06:43:46.196986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390637 ] 00:06:32.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.205 [2024-05-15 06:43:46.273660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.205 [2024-05-15 06:43:46.394625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=0x1 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=decompress 00:06:32.464 
06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=software 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=32 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=32 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=2 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val=Yes 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:32.464 06:43:46 -- accel/accel.sh@21 -- # val= 00:06:32.464 06:43:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # IFS=: 00:06:32.464 06:43:46 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@21 -- # val= 00:06:33.838 06:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@21 -- # val= 00:06:33.838 06:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@21 -- # val= 00:06:33.838 06:43:47 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@21 -- # val= 00:06:33.838 06:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@21 -- # val= 00:06:33.838 06:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@21 -- # val= 00:06:33.838 06:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@21 -- # val= 00:06:33.838 06:43:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # IFS=: 00:06:33.838 06:43:47 -- accel/accel.sh@20 -- # read -r var val 00:06:33.838 06:43:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.838 06:43:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:33.838 06:43:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.838 00:06:33.838 real 0m3.068s 00:06:33.838 user 0m2.736s 00:06:33.838 sys 0m0.325s 00:06:33.838 06:43:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.838 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:06:33.838 ************************************ 00:06:33.838 END TEST accel_deomp_full_mthread 00:06:33.838 ************************************ 00:06:33.838 06:43:47 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:33.838 06:43:47 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.838 06:43:47 -- accel/accel.sh@129 -- # build_accel_config 00:06:33.838 06:43:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:33.838 06:43:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.838 06:43:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.838 06:43:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.838 06:43:47 -- common/autotest_common.sh@10 -- # set +x 00:06:33.838 06:43:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.838 06:43:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.838 06:43:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.838 06:43:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.838 06:43:47 -- accel/accel.sh@42 -- # jq -r . 00:06:33.838 ************************************ 00:06:33.838 START TEST accel_dif_functional_tests 00:06:33.838 ************************************ 00:06:33.839 06:43:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:33.839 [2024-05-15 06:43:47.783305] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:33.839 [2024-05-15 06:43:47.783380] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390802 ] 00:06:33.839 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.839 [2024-05-15 06:43:47.856473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.839 [2024-05-15 06:43:47.977875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.839 [2024-05-15 06:43:47.977952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.839 [2024-05-15 06:43:47.977956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.097 00:06:34.097 00:06:34.097 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.097 http://cunit.sourceforge.net/ 00:06:34.097 00:06:34.097 00:06:34.097 Suite: accel_dif 00:06:34.097 Test: verify: DIF generated, GUARD check ...passed 00:06:34.097 Test: verify: DIF generated, APPTAG check ...passed 00:06:34.097 Test: verify: DIF generated, REFTAG check ...passed 00:06:34.097 Test: verify: DIF not generated, GUARD check ...[2024-05-15 06:43:48.081394] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.097 [2024-05-15 06:43:48.081459] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:34.097 passed 00:06:34.097 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 06:43:48.081504] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.097 [2024-05-15 06:43:48.081535] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:34.097 passed 00:06:34.097 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 06:43:48.081571] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.097 [2024-05-15 06:43:48.081600] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:34.097 passed 00:06:34.097 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:34.097 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 06:43:48.081673] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:34.097 passed 00:06:34.097 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:34.097 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:34.097 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:34.097 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 06:43:48.081835] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:34.097 passed 00:06:34.097 Test: generate copy: DIF generated, GUARD check ...passed 00:06:34.097 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:34.097 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:34.097 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:34.097 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:34.097 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:34.097 Test: generate copy: iovecs-len validate ...[2024-05-15 06:43:48.082109] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:34.097 passed 00:06:34.097 Test: generate copy: buffer alignment validate ...passed 00:06:34.097 00:06:34.097 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.097 suites 1 1 n/a 0 0 00:06:34.097 tests 20 20 20 0 0 00:06:34.097 asserts 204 204 204 0 n/a 00:06:34.097 00:06:34.097 Elapsed time = 0.003 seconds 00:06:34.356 00:06:34.356 real 0m0.607s 00:06:34.356 user 0m0.921s 00:06:34.356 sys 0m0.189s 00:06:34.356 06:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.356 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.356 ************************************ 00:06:34.356 END TEST accel_dif_functional_tests 00:06:34.356 ************************************ 00:06:34.356 00:06:34.356 real 1m3.742s 00:06:34.356 user 1m11.156s 00:06:34.356 sys 0m7.676s 00:06:34.356 06:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.356 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.356 ************************************ 00:06:34.356 END TEST accel 00:06:34.356 ************************************ 00:06:34.356 06:43:48 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:34.356 06:43:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.356 06:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.356 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.356 ************************************ 00:06:34.356 START TEST accel_rpc 00:06:34.356 ************************************ 00:06:34.356 06:43:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:34.356 * Looking for test storage... 00:06:34.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:34.356 06:43:48 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:34.356 06:43:48 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=390987 00:06:34.356 06:43:48 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:34.356 06:43:48 -- accel/accel_rpc.sh@15 -- # waitforlisten 390987 00:06:34.356 06:43:48 -- common/autotest_common.sh@819 -- # '[' -z 390987 ']' 00:06:34.356 06:43:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.356 06:43:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.356 06:43:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.356 06:43:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.356 06:43:48 -- common/autotest_common.sh@10 -- # set +x 00:06:34.356 [2024-05-15 06:43:48.504393] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:34.356 [2024-05-15 06:43:48.504481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid390987 ] 00:06:34.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.356 [2024-05-15 06:43:48.572856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.614 [2024-05-15 06:43:48.685881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.614 [2024-05-15 06:43:48.686063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.548 06:43:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.548 06:43:49 -- common/autotest_common.sh@852 -- # return 0 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:35.548 06:43:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.548 06:43:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.548 06:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.548 ************************************ 00:06:35.548 START TEST accel_assign_opcode 00:06:35.548 ************************************ 00:06:35.548 06:43:49 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:35.548 06:43:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.548 06:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.548 [2024-05-15 06:43:49.428405] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:35.548 06:43:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:35.548 06:43:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.548 06:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.548 [2024-05-15 06:43:49.436420] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:35.548 06:43:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:35.548 06:43:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.548 06:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.548 06:43:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:35.548 06:43:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.548 06:43:49 -- common/autotest_common.sh@10 -- # set +x 00:06:35.548 06:43:49 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:35.549 06:43:49 -- accel/accel_rpc.sh@42 -- # grep software 00:06:35.549 06:43:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.549 software 00:06:35.549 00:06:35.549 real 0m0.303s 00:06:35.549 user 0m0.041s 00:06:35.549 sys 0m0.005s 00:06:35.549 06:43:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.549 06:43:49 -- common/autotest_common.sh@10 -- # set +x 
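Note on the assign_opcode flow above: spdk_tgt is started with --wait-for-rpc precisely so that opcode-to-module assignments can land before the accel framework initializes. A sketch of the equivalent manual RPC sequence; the sleep is an assumed stand-in for the waitforlisten helper the suite actually uses:

#!/usr/bin/env bash
# Sketch of the accel_assign_opcode RPC sequence traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Start the target paused so assignments precede module initialization.
"$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
sleep 2  # assumption: the real test waits on the RPC socket instead

# A nonexistent module name ("incorrect") is accepted at this stage;
# the second call overrides it, and only the last one takes effect.
"$RPC" accel_assign_opc -o copy -m incorrect
"$RPC" accel_assign_opc -o copy -m software

# Finish initialization, then confirm copy is pinned to software.
"$RPC" framework_start_init
"$RPC" accel_get_opc_assignments | jq -r .copy | grep software

kill %1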
00:06:35.549 ************************************ 00:06:35.549 END TEST accel_assign_opcode 00:06:35.549 ************************************ 00:06:35.549 06:43:49 -- accel/accel_rpc.sh@55 -- # killprocess 390987 00:06:35.549 06:43:49 -- common/autotest_common.sh@926 -- # '[' -z 390987 ']' 00:06:35.549 06:43:49 -- common/autotest_common.sh@930 -- # kill -0 390987 00:06:35.549 06:43:49 -- common/autotest_common.sh@931 -- # uname 00:06:35.549 06:43:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.549 06:43:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 390987 00:06:35.549 06:43:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.549 06:43:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.549 06:43:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 390987' 00:06:35.549 killing process with pid 390987 00:06:35.549 06:43:49 -- common/autotest_common.sh@945 -- # kill 390987 00:06:35.549 06:43:49 -- common/autotest_common.sh@950 -- # wait 390987 00:06:36.115 00:06:36.115 real 0m1.829s 00:06:36.115 user 0m1.927s 00:06:36.115 sys 0m0.442s 00:06:36.115 06:43:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.115 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.115 ************************************ 00:06:36.115 END TEST accel_rpc 00:06:36.115 ************************************ 00:06:36.115 06:43:50 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:36.115 06:43:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.115 06:43:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.115 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.115 ************************************ 00:06:36.115 START TEST app_cmdline 00:06:36.115 ************************************ 00:06:36.115 06:43:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:36.115 * Looking for test storage... 00:06:36.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.115 06:43:50 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:36.115 06:43:50 -- app/cmdline.sh@17 -- # spdk_tgt_pid=391325 00:06:36.115 06:43:50 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:36.115 06:43:50 -- app/cmdline.sh@18 -- # waitforlisten 391325 00:06:36.115 06:43:50 -- common/autotest_common.sh@819 -- # '[' -z 391325 ']' 00:06:36.115 06:43:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.115 06:43:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.115 06:43:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.115 06:43:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.115 06:43:50 -- common/autotest_common.sh@10 -- # set +x 00:06:36.374 [2024-05-15 06:43:50.357022] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:36.374 [2024-05-15 06:43:50.357108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391325 ] 00:06:36.374 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.374 [2024-05-15 06:43:50.453126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.374 [2024-05-15 06:43:50.603439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.374 [2024-05-15 06:43:50.603679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.308 06:43:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.308 06:43:51 -- common/autotest_common.sh@852 -- # return 0 00:06:37.308 06:43:51 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:37.566 { 00:06:37.566 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:06:37.566 "fields": { 00:06:37.566 "major": 24, 00:06:37.566 "minor": 1, 00:06:37.566 "patch": 1, 00:06:37.566 "suffix": "-pre", 00:06:37.566 "commit": "36faa8c31" 00:06:37.566 } 00:06:37.566 } 00:06:37.566 06:43:51 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:37.566 06:43:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:37.566 06:43:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:37.566 06:43:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:37.566 06:43:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:37.566 06:43:51 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:37.566 06:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.566 06:43:51 -- common/autotest_common.sh@10 -- # set +x 00:06:37.566 06:43:51 -- app/cmdline.sh@26 -- # sort 00:06:37.566 06:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.566 06:43:51 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:37.566 06:43:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:37.566 06:43:51 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.566 06:43:51 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.566 06:43:51 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.566 06:43:51 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.566 06:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.566 06:43:51 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.566 06:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.566 06:43:51 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.566 06:43:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.566 06:43:51 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:37.566 06:43:51 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:37.566 06:43:51 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:37.824 request: 00:06:37.824 { 00:06:37.824 "method": "env_dpdk_get_mem_stats", 00:06:37.824 "req_id": 1 00:06:37.824 } 00:06:37.824 Got JSON-RPC error response 00:06:37.824 response: 00:06:37.824 { 00:06:37.824 "code": -32601, 00:06:37.824 "message": "Method not found" 00:06:37.824 } 00:06:37.824 06:43:51 -- common/autotest_common.sh@643 -- # es=1 00:06:37.825 06:43:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.825 06:43:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:37.825 06:43:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.825 06:43:51 -- app/cmdline.sh@1 -- # killprocess 391325 00:06:37.825 06:43:51 -- common/autotest_common.sh@926 -- # '[' -z 391325 ']' 00:06:37.825 06:43:51 -- common/autotest_common.sh@930 -- # kill -0 391325 00:06:37.825 06:43:51 -- common/autotest_common.sh@931 -- # uname 00:06:37.825 06:43:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.825 06:43:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 391325 00:06:37.825 06:43:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.825 06:43:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.825 06:43:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 391325' 00:06:37.825 killing process with pid 391325 00:06:37.825 06:43:51 -- common/autotest_common.sh@945 -- # kill 391325 00:06:37.825 06:43:51 -- common/autotest_common.sh@950 -- # wait 391325 00:06:38.392 00:06:38.392 real 0m2.127s 00:06:38.392 user 0m2.638s 00:06:38.392 sys 0m0.567s 00:06:38.392 06:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.392 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.392 ************************************ 00:06:38.392 END TEST app_cmdline 00:06:38.392 ************************************ 00:06:38.392 06:43:52 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:38.392 06:43:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.392 06:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.392 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.392 ************************************ 00:06:38.392 START TEST version 00:06:38.392 ************************************ 00:06:38.392 06:43:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:38.392 * Looking for test storage... 
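Stepping back to the app_cmdline run that just ended: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so the whitelisted call returns the version JSON shown above while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 ("Method not found"), which is exactly what the suite asserts. A condensed sketch of both sides:

#!/usr/bin/env bash
# Sketch of the app_cmdline RPC-whitelist check traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Only these two methods are allowed for this target instance.
"$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 2  # assumption: stands in for the suite's waitforlisten

# Whitelisted: prints the SPDK v24.01.1-pre version string seen above.
"$RPC" spdk_get_version | jq -r .version

# Not whitelisted: expected to fail with -32601 "Method not found".
if "$RPC" env_dpdk_get_mem_stats; then
  echo "unexpected success" >&2
fi

kill %1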
00:06:38.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:38.392 06:43:52 -- app/version.sh@17 -- # get_header_version major 00:06:38.392 06:43:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.392 06:43:52 -- app/version.sh@14 -- # cut -f2 00:06:38.392 06:43:52 -- app/version.sh@14 -- # tr -d '"' 00:06:38.392 06:43:52 -- app/version.sh@17 -- # major=24 00:06:38.392 06:43:52 -- app/version.sh@18 -- # get_header_version minor 00:06:38.392 06:43:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.392 06:43:52 -- app/version.sh@14 -- # cut -f2 00:06:38.392 06:43:52 -- app/version.sh@14 -- # tr -d '"' 00:06:38.392 06:43:52 -- app/version.sh@18 -- # minor=1 00:06:38.392 06:43:52 -- app/version.sh@19 -- # get_header_version patch 00:06:38.392 06:43:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.392 06:43:52 -- app/version.sh@14 -- # cut -f2 00:06:38.392 06:43:52 -- app/version.sh@14 -- # tr -d '"' 00:06:38.392 06:43:52 -- app/version.sh@19 -- # patch=1 00:06:38.392 06:43:52 -- app/version.sh@20 -- # get_header_version suffix 00:06:38.392 06:43:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:38.392 06:43:52 -- app/version.sh@14 -- # cut -f2 00:06:38.392 06:43:52 -- app/version.sh@14 -- # tr -d '"' 00:06:38.392 06:43:52 -- app/version.sh@20 -- # suffix=-pre 00:06:38.392 06:43:52 -- app/version.sh@22 -- # version=24.1 00:06:38.392 06:43:52 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:38.392 06:43:52 -- app/version.sh@25 -- # version=24.1.1 00:06:38.392 06:43:52 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:38.392 06:43:52 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:38.392 06:43:52 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:38.392 06:43:52 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:38.392 06:43:52 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:38.392 00:06:38.392 real 0m0.103s 00:06:38.392 user 0m0.063s 00:06:38.392 sys 0m0.061s 00:06:38.392 06:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.392 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.392 ************************************ 00:06:38.392 END TEST version 00:06:38.392 ************************************ 00:06:38.392 06:43:52 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@204 -- # uname -s 00:06:38.392 06:43:52 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:38.392 06:43:52 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:38.392 06:43:52 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:38.392 06:43:52 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:38.392 06:43:52 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:38.392 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.392 06:43:52 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:38.392 06:43:52 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:38.392 06:43:52 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.392 06:43:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:38.392 06:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.392 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.392 ************************************ 00:06:38.392 START TEST nvmf_tcp 00:06:38.392 ************************************ 00:06:38.392 06:43:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:38.392 * Looking for test storage... 00:06:38.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:38.392 06:43:52 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:38.392 06:43:52 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:38.392 06:43:52 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.392 06:43:52 -- nvmf/common.sh@7 -- # uname -s 00:06:38.392 06:43:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.392 06:43:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.392 06:43:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.392 06:43:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.392 06:43:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.392 06:43:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.392 06:43:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.392 06:43:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.392 06:43:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.392 06:43:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.392 06:43:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.392 06:43:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.392 06:43:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.392 06:43:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.392 06:43:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.392 06:43:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.392 06:43:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.392 06:43:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.392 06:43:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.392 06:43:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.393 06:43:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.393 06:43:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.393 06:43:52 -- paths/export.sh@5 -- # export PATH 00:06:38.393 06:43:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.393 06:43:52 -- nvmf/common.sh@46 -- # : 0 00:06:38.393 06:43:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:38.393 06:43:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:38.393 06:43:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:38.393 06:43:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.393 06:43:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.393 06:43:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:38.393 06:43:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:38.393 06:43:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:38.393 06:43:52 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:38.393 06:43:52 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:38.393 06:43:52 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:38.393 06:43:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:38.393 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.678 06:43:52 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:38.678 06:43:52 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.678 06:43:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:38.678 06:43:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.678 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.678 ************************************ 00:06:38.678 START TEST nvmf_example 00:06:38.678 ************************************ 00:06:38.678 06:43:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:38.678 * Looking for test storage... 
00:06:38.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.678 06:43:52 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.678 06:43:52 -- nvmf/common.sh@7 -- # uname -s 00:06:38.678 06:43:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.678 06:43:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.678 06:43:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.678 06:43:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.678 06:43:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.678 06:43:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.678 06:43:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.678 06:43:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.678 06:43:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.678 06:43:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.678 06:43:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.678 06:43:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.678 06:43:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.678 06:43:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.678 06:43:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.678 06:43:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.678 06:43:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.678 06:43:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.678 06:43:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.678 06:43:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.678 06:43:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.678 06:43:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.678 06:43:52 -- paths/export.sh@5 -- # export PATH 00:06:38.679 06:43:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.679 06:43:52 -- nvmf/common.sh@46 -- # : 0 00:06:38.679 06:43:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:38.679 06:43:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:38.679 06:43:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:38.679 06:43:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.679 06:43:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.679 06:43:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:38.679 06:43:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:38.679 06:43:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:38.679 06:43:52 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:38.679 06:43:52 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:38.679 06:43:52 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:38.679 06:43:52 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:38.679 06:43:52 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:38.679 06:43:52 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:38.679 06:43:52 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:38.679 06:43:52 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:38.679 06:43:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:38.679 06:43:52 -- common/autotest_common.sh@10 -- # set +x 00:06:38.679 06:43:52 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:38.679 06:43:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:38.679 06:43:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.679 06:43:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:38.679 06:43:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:38.679 06:43:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:38.679 06:43:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.679 06:43:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.679 06:43:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.679 06:43:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:38.679 06:43:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:38.679 06:43:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:38.679 06:43:52 -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.207 06:43:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:41.207 06:43:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:41.207 06:43:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:41.207 06:43:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:41.207 06:43:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:41.207 06:43:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:41.207 06:43:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:41.207 06:43:55 -- nvmf/common.sh@294 -- # net_devs=() 00:06:41.207 06:43:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:41.207 06:43:55 -- nvmf/common.sh@295 -- # e810=() 00:06:41.207 06:43:55 -- nvmf/common.sh@295 -- # local -ga e810 00:06:41.207 06:43:55 -- nvmf/common.sh@296 -- # x722=() 00:06:41.207 06:43:55 -- nvmf/common.sh@296 -- # local -ga x722 00:06:41.207 06:43:55 -- nvmf/common.sh@297 -- # mlx=() 00:06:41.207 06:43:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:41.207 06:43:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.207 06:43:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.207 06:43:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.208 06:43:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:41.208 06:43:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:41.208 06:43:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:41.208 06:43:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:41.208 06:43:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:41.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:41.208 06:43:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:41.208 06:43:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:41.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:41.208 06:43:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:06:41.208 06:43:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:41.208 06:43:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:41.208 06:43:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.208 06:43:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:41.208 06:43:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.208 06:43:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:41.208 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:41.208 06:43:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.208 06:43:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:41.208 06:43:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.208 06:43:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:41.208 06:43:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.208 06:43:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:41.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:41.208 06:43:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.208 06:43:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:41.208 06:43:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:41.208 06:43:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:41.208 06:43:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.208 06:43:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.208 06:43:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.208 06:43:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:41.208 06:43:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.208 06:43:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.208 06:43:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:41.208 06:43:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.208 06:43:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.208 06:43:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:41.208 06:43:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:41.208 06:43:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.208 06:43:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.208 06:43:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.208 06:43:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.208 06:43:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:41.208 06:43:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.208 06:43:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.208 06:43:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.208 06:43:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:41.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:41.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:06:41.208 00:06:41.208 --- 10.0.0.2 ping statistics --- 00:06:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.208 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:06:41.208 06:43:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:06:41.208 00:06:41.208 --- 10.0.0.1 ping statistics --- 00:06:41.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.208 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:06:41.208 06:43:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.208 06:43:55 -- nvmf/common.sh@410 -- # return 0 00:06:41.208 06:43:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:41.208 06:43:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.208 06:43:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:41.208 06:43:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.208 06:43:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:41.208 06:43:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:41.208 06:43:55 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:41.208 06:43:55 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:41.208 06:43:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:41.208 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:06:41.208 06:43:55 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:41.208 06:43:55 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:41.208 06:43:55 -- target/nvmf_example.sh@34 -- # nvmfpid=393655 00:06:41.208 06:43:55 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:41.208 06:43:55 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:41.208 06:43:55 -- target/nvmf_example.sh@36 -- # waitforlisten 393655 00:06:41.208 06:43:55 -- common/autotest_common.sh@819 -- # '[' -z 393655 ']' 00:06:41.208 06:43:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.208 06:43:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.208 06:43:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
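
The namespace plumbing traced above (nvmf_tcp_init) condenses to the sequence below; every command is taken from the trace, with the interface names and addresses of this run:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                            # target port gets its own stack
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec $NS ping -c 1 10.0.0.1                     # target -> initiator

Moving one port into its own namespace gives the two E810 ports independent IP stacks, so the pings above (and the NVMe/TCP traffic that follows) traverse the physical link instead of being short-circuited through loopback.
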
00:06:41.208 06:43:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.208 06:43:55 -- common/autotest_common.sh@10 -- # set +x 00:06:41.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.399 06:43:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.399 06:43:56 -- common/autotest_common.sh@852 -- # return 0 00:06:42.399 06:43:56 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:42.399 06:43:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:42.399 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.399 06:43:56 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:42.399 06:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.399 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.400 06:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.400 06:43:56 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:42.400 06:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.400 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.400 06:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.400 06:43:56 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:42.400 06:43:56 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:42.400 06:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.400 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.400 06:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.400 06:43:56 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:42.400 06:43:56 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:42.400 06:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.400 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.400 06:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.400 06:43:56 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:42.400 06:43:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:42.400 06:43:56 -- common/autotest_common.sh@10 -- # set +x 00:06:42.400 06:43:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:42.400 06:43:56 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:42.400 06:43:56 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:42.400 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.595 Initializing NVMe Controllers 00:06:54.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:54.595 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:54.595 Initialization complete. Launching workers. 
00:06:54.595 ========================================================
00:06:54.595 Latency(us)
00:06:54.595 Device Information : IOPS MiB/s Average min max
00:06:54.595 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15191.50 59.34 4214.00 882.31 16080.53
00:06:54.595 ========================================================
00:06:54.595 Total : 15191.50 59.34 4214.00 882.31 16080.53
00:06:54.595
00:06:54.595 06:44:06 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:54.595 06:44:06 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:54.595 06:44:06 -- nvmf/common.sh@476 -- # nvmfcleanup
00:06:54.595 06:44:06 -- nvmf/common.sh@116 -- # sync
00:06:54.595 06:44:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:06:54.595 06:44:06 -- nvmf/common.sh@119 -- # set +e
00:06:54.595 06:44:06 -- nvmf/common.sh@120 -- # for i in {1..20}
00:06:54.595 06:44:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:06:54.595 rmmod nvme_tcp
00:06:54.595 rmmod nvme_fabrics
00:06:54.595 rmmod nvme_keyring
00:06:54.595 06:44:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:06:54.595 06:44:06 -- nvmf/common.sh@123 -- # set -e
00:06:54.595 06:44:06 -- nvmf/common.sh@124 -- # return 0
00:06:54.595 06:44:06 -- nvmf/common.sh@477 -- # '[' -n 393655 ']'
00:06:54.595 06:44:06 -- nvmf/common.sh@478 -- # killprocess 393655
00:06:54.595 06:44:06 -- common/autotest_common.sh@926 -- # '[' -z 393655 ']'
00:06:54.595 06:44:06 -- common/autotest_common.sh@930 -- # kill -0 393655
00:06:54.595 06:44:06 -- common/autotest_common.sh@931 -- # uname
00:06:54.595 06:44:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:54.595 06:44:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 393655
00:06:54.595 06:44:06 -- common/autotest_common.sh@932 -- # process_name=nvmf
00:06:54.595 06:44:06 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']'
00:06:54.595 06:44:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 393655'
00:06:54.595 killing process with pid 393655
00:06:54.595 06:44:06 -- common/autotest_common.sh@945 -- # kill 393655
00:06:54.595 06:44:06 -- common/autotest_common.sh@950 -- # wait 393655
00:06:54.595 nvmf threads initialize successfully
00:06:54.595 bdev subsystem init successfully
00:06:54.595 created a nvmf target service
00:06:54.595 create targets's poll groups done
00:06:54.595 all subsystems of target started
00:06:54.595 nvmf target is running
00:06:54.595 all subsystems of target stopped
00:06:54.595 destroy targets's poll groups done
00:06:54.595 destroyed the nvmf target service
00:06:54.595 bdev subsystem finish successfully
00:06:54.595 nvmf threads destroy successfully
00:06:54.595 06:44:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:06:54.595 06:44:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:06:54.595 06:44:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:06:54.595 06:44:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:54.595 06:44:07 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:06:54.595 06:44:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:54.595 06:44:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:54.595 06:44:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:55.167 06:44:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:06:55.167 06:44:09 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:55.167 06:44:09 -- common/autotest_common.sh@718 -- #
xtrace_disable 00:06:55.167 06:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:55.167 00:06:55.167 real 0m16.541s 00:06:55.167 user 0m45.589s 00:06:55.167 sys 0m3.691s 00:06:55.167 06:44:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.167 06:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:55.167 ************************************ 00:06:55.167 END TEST nvmf_example 00:06:55.167 ************************************ 00:06:55.167 06:44:09 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:55.167 06:44:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:55.167 06:44:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.167 06:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:55.167 ************************************ 00:06:55.167 START TEST nvmf_filesystem 00:06:55.167 ************************************ 00:06:55.167 06:44:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:55.167 * Looking for test storage... 00:06:55.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.167 06:44:09 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:55.167 06:44:09 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:55.167 06:44:09 -- common/autotest_common.sh@34 -- # set -e 00:06:55.167 06:44:09 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:55.167 06:44:09 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:55.167 06:44:09 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:55.167 06:44:09 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:55.167 06:44:09 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:55.167 06:44:09 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:55.167 06:44:09 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:55.167 06:44:09 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:55.167 06:44:09 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:55.167 06:44:09 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:55.167 06:44:09 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:55.167 06:44:09 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:55.167 06:44:09 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:55.167 06:44:09 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:55.167 06:44:09 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:55.167 06:44:09 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:55.167 06:44:09 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:55.167 06:44:09 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:55.167 06:44:09 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:55.167 06:44:09 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:55.167 06:44:09 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:55.167 06:44:09 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:55.167 06:44:09 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:55.167 06:44:09 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:06:55.167 06:44:09 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:55.167 06:44:09 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:55.167 06:44:09 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:55.167 06:44:09 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:55.167 06:44:09 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:55.167 06:44:09 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:55.167 06:44:09 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:55.167 06:44:09 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:55.167 06:44:09 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:55.167 06:44:09 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:55.167 06:44:09 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:55.167 06:44:09 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:55.167 06:44:09 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:55.167 06:44:09 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:55.167 06:44:09 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:55.167 06:44:09 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:55.167 06:44:09 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:55.167 06:44:09 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:55.167 06:44:09 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:55.167 06:44:09 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:55.167 06:44:09 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:55.167 06:44:09 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:55.167 06:44:09 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:55.167 06:44:09 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:55.167 06:44:09 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:55.167 06:44:09 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:55.167 06:44:09 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:55.167 06:44:09 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:55.167 06:44:09 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:55.167 06:44:09 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:55.167 06:44:09 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:55.167 06:44:09 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:55.167 06:44:09 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:55.167 06:44:09 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:55.167 06:44:09 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:55.167 06:44:09 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:55.167 06:44:09 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:55.167 06:44:09 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:55.167 06:44:09 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:55.167 06:44:09 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:06:55.167 06:44:09 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:55.167 06:44:09 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:55.167 06:44:09 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:55.167 06:44:09 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:55.167 06:44:09 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:55.167 06:44:09 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:55.167 06:44:09 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:06:55.167 06:44:09 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:55.167 06:44:09 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:55.167 06:44:09 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:55.167 06:44:09 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:55.167 06:44:09 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:55.167 06:44:09 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:55.167 06:44:09 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:55.167 06:44:09 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:55.167 06:44:09 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:55.167 06:44:09 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:55.167 06:44:09 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:55.167 06:44:09 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:55.167 06:44:09 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:55.167 06:44:09 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:55.167 06:44:09 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:55.167 06:44:09 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:55.167 06:44:09 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:55.167 06:44:09 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:55.167 06:44:09 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:55.167 06:44:09 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:55.167 06:44:09 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:55.167 06:44:09 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:55.167 06:44:09 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:55.168 06:44:09 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:55.168 06:44:09 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:55.168 06:44:09 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:55.168 06:44:09 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:55.168 06:44:09 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:55.168 #define SPDK_CONFIG_H 00:06:55.168 #define SPDK_CONFIG_APPS 1 00:06:55.168 #define SPDK_CONFIG_ARCH native 00:06:55.168 #undef SPDK_CONFIG_ASAN 00:06:55.168 #undef SPDK_CONFIG_AVAHI 00:06:55.168 #undef SPDK_CONFIG_CET 00:06:55.168 #define SPDK_CONFIG_COVERAGE 1 00:06:55.168 #define SPDK_CONFIG_CROSS_PREFIX 00:06:55.168 #undef SPDK_CONFIG_CRYPTO 00:06:55.168 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:55.168 #undef SPDK_CONFIG_CUSTOMOCF 00:06:55.168 #undef SPDK_CONFIG_DAOS 00:06:55.168 #define SPDK_CONFIG_DAOS_DIR 00:06:55.168 #define SPDK_CONFIG_DEBUG 1 00:06:55.168 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:55.168 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:55.168 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:55.168 #define SPDK_CONFIG_DPDK_LIB_DIR 
00:06:55.168 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:55.168 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:55.168 #define SPDK_CONFIG_EXAMPLES 1 00:06:55.168 #undef SPDK_CONFIG_FC 00:06:55.168 #define SPDK_CONFIG_FC_PATH 00:06:55.168 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:55.168 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:55.168 #undef SPDK_CONFIG_FUSE 00:06:55.168 #undef SPDK_CONFIG_FUZZER 00:06:55.168 #define SPDK_CONFIG_FUZZER_LIB 00:06:55.168 #undef SPDK_CONFIG_GOLANG 00:06:55.168 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:55.168 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:55.168 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:55.168 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:55.168 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:55.168 #define SPDK_CONFIG_IDXD 1 00:06:55.168 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:55.168 #undef SPDK_CONFIG_IPSEC_MB 00:06:55.168 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:55.168 #define SPDK_CONFIG_ISAL 1 00:06:55.168 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:55.168 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:55.168 #define SPDK_CONFIG_LIBDIR 00:06:55.168 #undef SPDK_CONFIG_LTO 00:06:55.168 #define SPDK_CONFIG_MAX_LCORES 00:06:55.168 #define SPDK_CONFIG_NVME_CUSE 1 00:06:55.168 #undef SPDK_CONFIG_OCF 00:06:55.168 #define SPDK_CONFIG_OCF_PATH 00:06:55.168 #define SPDK_CONFIG_OPENSSL_PATH 00:06:55.168 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:55.168 #undef SPDK_CONFIG_PGO_USE 00:06:55.168 #define SPDK_CONFIG_PREFIX /usr/local 00:06:55.168 #undef SPDK_CONFIG_RAID5F 00:06:55.168 #undef SPDK_CONFIG_RBD 00:06:55.168 #define SPDK_CONFIG_RDMA 1 00:06:55.168 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:55.168 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:55.168 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:55.168 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:55.168 #define SPDK_CONFIG_SHARED 1 00:06:55.168 #undef SPDK_CONFIG_SMA 00:06:55.168 #define SPDK_CONFIG_TESTS 1 00:06:55.168 #undef SPDK_CONFIG_TSAN 00:06:55.168 #define SPDK_CONFIG_UBLK 1 00:06:55.168 #define SPDK_CONFIG_UBSAN 1 00:06:55.168 #undef SPDK_CONFIG_UNIT_TESTS 00:06:55.168 #undef SPDK_CONFIG_URING 00:06:55.168 #define SPDK_CONFIG_URING_PATH 00:06:55.168 #undef SPDK_CONFIG_URING_ZNS 00:06:55.168 #undef SPDK_CONFIG_USDT 00:06:55.168 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:55.168 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:55.168 #undef SPDK_CONFIG_VFIO_USER 00:06:55.168 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:55.168 #define SPDK_CONFIG_VHOST 1 00:06:55.168 #define SPDK_CONFIG_VIRTIO 1 00:06:55.168 #undef SPDK_CONFIG_VTUNE 00:06:55.168 #define SPDK_CONFIG_VTUNE_DIR 00:06:55.168 #define SPDK_CONFIG_WERROR 1 00:06:55.168 #define SPDK_CONFIG_WPDK_DIR 00:06:55.168 #undef SPDK_CONFIG_XNVME 00:06:55.168 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:55.168 06:44:09 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:55.168 06:44:09 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.168 06:44:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.168 06:44:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.168 06:44:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.168 06:44:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.168 06:44:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.168 06:44:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.168 06:44:09 -- paths/export.sh@5 -- # export PATH 00:06:55.168 06:44:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.168 06:44:09 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.168 06:44:09 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:55.168 06:44:09 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:55.168 06:44:09 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:55.168 06:44:09 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:55.168 06:44:09 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:55.168 06:44:09 -- pm/common@16 -- # TEST_TAG=N/A 00:06:55.168 06:44:09 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:55.168 06:44:09 -- common/autotest_common.sh@52 -- # : 1 00:06:55.168 06:44:09 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:55.168 06:44:09 -- common/autotest_common.sh@56 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:55.168 06:44:09 -- 
common/autotest_common.sh@58 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:55.168 06:44:09 -- common/autotest_common.sh@60 -- # : 1 00:06:55.168 06:44:09 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:55.168 06:44:09 -- common/autotest_common.sh@62 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:55.168 06:44:09 -- common/autotest_common.sh@64 -- # : 00:06:55.168 06:44:09 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:55.168 06:44:09 -- common/autotest_common.sh@66 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:55.168 06:44:09 -- common/autotest_common.sh@68 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:55.168 06:44:09 -- common/autotest_common.sh@70 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:55.168 06:44:09 -- common/autotest_common.sh@72 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:55.168 06:44:09 -- common/autotest_common.sh@74 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:55.168 06:44:09 -- common/autotest_common.sh@76 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:55.168 06:44:09 -- common/autotest_common.sh@78 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:55.168 06:44:09 -- common/autotest_common.sh@80 -- # : 1 00:06:55.168 06:44:09 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:55.168 06:44:09 -- common/autotest_common.sh@82 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:55.168 06:44:09 -- common/autotest_common.sh@84 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:55.168 06:44:09 -- common/autotest_common.sh@86 -- # : 1 00:06:55.168 06:44:09 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:55.168 06:44:09 -- common/autotest_common.sh@88 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:55.168 06:44:09 -- common/autotest_common.sh@90 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:55.168 06:44:09 -- common/autotest_common.sh@92 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:55.168 06:44:09 -- common/autotest_common.sh@94 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:55.168 06:44:09 -- common/autotest_common.sh@96 -- # : tcp 00:06:55.168 06:44:09 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:55.168 06:44:09 -- common/autotest_common.sh@98 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:55.168 06:44:09 -- common/autotest_common.sh@100 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:55.168 06:44:09 -- common/autotest_common.sh@102 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:55.168 06:44:09 -- common/autotest_common.sh@104 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:55.168 
06:44:09 -- common/autotest_common.sh@106 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:55.168 06:44:09 -- common/autotest_common.sh@108 -- # : 0 00:06:55.168 06:44:09 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:55.168 06:44:09 -- common/autotest_common.sh@110 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:55.169 06:44:09 -- common/autotest_common.sh@112 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:55.169 06:44:09 -- common/autotest_common.sh@114 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:55.169 06:44:09 -- common/autotest_common.sh@116 -- # : 1 00:06:55.169 06:44:09 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:55.169 06:44:09 -- common/autotest_common.sh@118 -- # : 00:06:55.169 06:44:09 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:55.169 06:44:09 -- common/autotest_common.sh@120 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:55.169 06:44:09 -- common/autotest_common.sh@122 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:55.169 06:44:09 -- common/autotest_common.sh@124 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:55.169 06:44:09 -- common/autotest_common.sh@126 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:55.169 06:44:09 -- common/autotest_common.sh@128 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:55.169 06:44:09 -- common/autotest_common.sh@130 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:55.169 06:44:09 -- common/autotest_common.sh@132 -- # : 00:06:55.169 06:44:09 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:55.169 06:44:09 -- common/autotest_common.sh@134 -- # : true 00:06:55.169 06:44:09 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:55.169 06:44:09 -- common/autotest_common.sh@136 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:55.169 06:44:09 -- common/autotest_common.sh@138 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:55.169 06:44:09 -- common/autotest_common.sh@140 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:55.169 06:44:09 -- common/autotest_common.sh@142 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:55.169 06:44:09 -- common/autotest_common.sh@144 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:55.169 06:44:09 -- common/autotest_common.sh@146 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:55.169 06:44:09 -- common/autotest_common.sh@148 -- # : e810 00:06:55.169 06:44:09 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:55.169 06:44:09 -- common/autotest_common.sh@150 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:55.169 06:44:09 -- common/autotest_common.sh@152 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:06:55.169 06:44:09 -- common/autotest_common.sh@154 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:55.169 06:44:09 -- common/autotest_common.sh@156 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:55.169 06:44:09 -- common/autotest_common.sh@158 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:55.169 06:44:09 -- common/autotest_common.sh@160 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:55.169 06:44:09 -- common/autotest_common.sh@163 -- # : 00:06:55.169 06:44:09 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:55.169 06:44:09 -- common/autotest_common.sh@165 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:55.169 06:44:09 -- common/autotest_common.sh@167 -- # : 0 00:06:55.169 06:44:09 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:55.169 06:44:09 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:55.169 06:44:09 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.169 06:44:09 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:55.169 06:44:09 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:55.169 06:44:09 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:55.169 06:44:09 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:55.169 06:44:09 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:55.169 06:44:09 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.169 06:44:09 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:55.169 06:44:09 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.169 06:44:09 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:55.169 06:44:09 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:55.169 06:44:09 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:55.169 06:44:09 -- common/autotest_common.sh@196 -- # cat 00:06:55.169 06:44:09 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:55.169 06:44:09 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.169 06:44:09 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:55.169 06:44:09 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.169 06:44:09 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:55.169 06:44:09 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:55.169 06:44:09 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:55.169 06:44:09 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:55.169 06:44:09 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:55.169 06:44:09 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:55.169 06:44:09 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:55.169 06:44:09 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.169 06:44:09 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:55.169 06:44:09 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.169 06:44:09 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:55.169 06:44:09 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.169 06:44:09 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:55.169 06:44:09 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.169 06:44:09 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:55.169 06:44:09 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:55.169 06:44:09 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:55.169 06:44:09 -- common/autotest_common.sh@249 -- # valgrind= 00:06:55.169 06:44:09 -- common/autotest_common.sh@255 -- # uname -s 00:06:55.169 06:44:09 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:55.169 06:44:09 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:55.169 06:44:09 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:55.169 06:44:09 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:55.169 06:44:09 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:55.169 06:44:09 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:55.169 06:44:09 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:55.169 06:44:09 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:06:55.169 06:44:09 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:55.169 06:44:09 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:55.169 06:44:09 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:55.169 06:44:09 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:55.169 06:44:09 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:55.169 06:44:09 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:55.169 06:44:09 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:55.169 06:44:09 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:55.169 06:44:09 -- common/autotest_common.sh@309 -- # [[ -z 395413 ]] 00:06:55.169 06:44:09 -- common/autotest_common.sh@309 -- # 
kill -0 395413 00:06:55.170 06:44:09 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:55.170 06:44:09 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:55.170 06:44:09 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:55.170 06:44:09 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:55.170 06:44:09 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:55.170 06:44:09 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:55.170 06:44:09 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:55.170 06:44:09 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.LGSv5U 00:06:55.170 06:44:09 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:55.170 06:44:09 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LGSv5U/tests/target /tmp/spdk.LGSv5U 00:06:55.170 06:44:09 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@318 -- # df -T 00:06:55.170 06:44:09 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:55.170 06:44:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=968667136 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:06:55.170 06:44:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=4315762688 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=48489328640 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994729472 00:06:55.170 06:44:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=13505400832 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943846400 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=30997364736 00:06:55.170 06:44:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=12389986304 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398948352 00:06:55.170 06:44:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=8962048 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=30995628032 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997364736 00:06:55.170 06:44:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=1736704 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199468032 00:06:55.170 06:44:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199472128 00:06:55.170 06:44:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:55.170 06:44:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:55.170 06:44:09 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:55.170 * Looking for test storage... 
00:06:55.170 06:44:09 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:55.170 06:44:09 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:55.170 06:44:09 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.170 06:44:09 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:55.170 06:44:09 -- common/autotest_common.sh@363 -- # mount=/ 00:06:55.170 06:44:09 -- common/autotest_common.sh@365 -- # target_space=48489328640 00:06:55.170 06:44:09 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:55.170 06:44:09 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:55.170 06:44:09 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@372 -- # new_size=15719993344 00:06:55.170 06:44:09 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:55.170 06:44:09 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.170 06:44:09 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.170 06:44:09 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.170 06:44:09 -- common/autotest_common.sh@380 -- # return 0 00:06:55.170 06:44:09 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:55.170 06:44:09 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:55.170 06:44:09 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:55.170 06:44:09 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:55.170 06:44:09 -- common/autotest_common.sh@1672 -- # true 00:06:55.170 06:44:09 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:55.170 06:44:09 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:55.170 06:44:09 -- common/autotest_common.sh@27 -- # exec 00:06:55.170 06:44:09 -- common/autotest_common.sh@29 -- # exec 00:06:55.170 06:44:09 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:55.170 06:44:09 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:55.170 06:44:09 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:55.170 06:44:09 -- common/autotest_common.sh@18 -- # set -x 00:06:55.170 06:44:09 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.170 06:44:09 -- nvmf/common.sh@7 -- # uname -s 00:06:55.170 06:44:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.170 06:44:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.170 06:44:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.170 06:44:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.170 06:44:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.170 06:44:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.170 06:44:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.170 06:44:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.170 06:44:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.170 06:44:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.170 06:44:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.170 06:44:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.170 06:44:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.170 06:44:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.170 06:44:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.170 06:44:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.170 06:44:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.170 06:44:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.170 06:44:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.170 06:44:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.170 06:44:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.170 06:44:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.170 06:44:09 -- paths/export.sh@5 -- # export PATH 00:06:55.170 06:44:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.170 06:44:09 -- nvmf/common.sh@46 -- # : 0 00:06:55.170 06:44:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:55.170 06:44:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:55.170 06:44:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:55.170 06:44:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.171 06:44:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.171 06:44:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:55.171 06:44:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:55.171 06:44:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:55.171 06:44:09 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:55.171 06:44:09 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:55.171 06:44:09 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:55.171 06:44:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:55.171 06:44:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.171 06:44:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:55.171 06:44:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:55.171 06:44:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:55.171 06:44:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.171 06:44:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.171 06:44:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.171 06:44:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:55.171 06:44:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:55.171 06:44:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:55.171 06:44:09 -- common/autotest_common.sh@10 -- # set +x 00:06:57.702 06:44:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:57.702 06:44:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:57.702 06:44:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:57.702 06:44:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:57.702 06:44:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:57.702 06:44:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:57.702 06:44:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:57.702 06:44:11 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:57.702 06:44:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:57.702 06:44:11 -- nvmf/common.sh@295 -- # e810=() 00:06:57.702 06:44:11 -- nvmf/common.sh@295 -- # local -ga e810 00:06:57.702 06:44:11 -- nvmf/common.sh@296 -- # x722=() 00:06:57.702 06:44:11 -- nvmf/common.sh@296 -- # local -ga x722 00:06:57.702 06:44:11 -- nvmf/common.sh@297 -- # mlx=() 00:06:57.702 06:44:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:57.702 06:44:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:57.702 06:44:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:57.702 06:44:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:57.702 06:44:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:57.702 06:44:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:57.702 06:44:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:57.702 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:57.702 06:44:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:57.702 06:44:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:57.702 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:57.702 06:44:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:57.702 06:44:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:57.702 06:44:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.702 06:44:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:57.702 06:44:11 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.702 06:44:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:57.702 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:57.702 06:44:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.702 06:44:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:57.702 06:44:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.702 06:44:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:57.702 06:44:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.702 06:44:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:57.702 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:57.702 06:44:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.702 06:44:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:57.702 06:44:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:57.702 06:44:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:57.702 06:44:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:57.702 06:44:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.702 06:44:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.702 06:44:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:57.702 06:44:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:57.702 06:44:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:57.702 06:44:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:57.702 06:44:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:57.702 06:44:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:57.702 06:44:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.702 06:44:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:57.702 06:44:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:57.702 06:44:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:57.702 06:44:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:57.702 06:44:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:57.702 06:44:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:57.702 06:44:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:57.702 06:44:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:57.702 06:44:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:57.702 06:44:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:57.702 06:44:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:57.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:57.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:06:57.702 00:06:57.702 --- 10.0.0.2 ping statistics --- 00:06:57.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.702 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:57.702 06:44:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:57.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:57.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:06:57.702 00:06:57.702 --- 10.0.0.1 ping statistics --- 00:06:57.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.702 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:06:57.702 06:44:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.702 06:44:11 -- nvmf/common.sh@410 -- # return 0 00:06:57.703 06:44:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:57.703 06:44:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.703 06:44:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:57.703 06:44:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:57.703 06:44:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.703 06:44:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:57.703 06:44:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:57.703 06:44:11 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:57.703 06:44:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:57.703 06:44:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.703 06:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.703 ************************************ 00:06:57.703 START TEST nvmf_filesystem_no_in_capsule 00:06:57.703 ************************************ 00:06:57.703 06:44:11 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:57.703 06:44:11 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:57.703 06:44:11 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:57.703 06:44:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:57.703 06:44:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:57.703 06:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.703 06:44:11 -- nvmf/common.sh@469 -- # nvmfpid=397448 00:06:57.703 06:44:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:57.703 06:44:11 -- nvmf/common.sh@470 -- # waitforlisten 397448 00:06:57.703 06:44:11 -- common/autotest_common.sh@819 -- # '[' -z 397448 ']' 00:06:57.703 06:44:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.703 06:44:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:57.703 06:44:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.703 06:44:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:57.703 06:44:11 -- common/autotest_common.sh@10 -- # set +x 00:06:57.961 [2024-05-15 06:44:11.943783] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:57.961 [2024-05-15 06:44:11.943850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.961 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.961 [2024-05-15 06:44:12.020450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.961 [2024-05-15 06:44:12.133798] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:57.961 [2024-05-15 06:44:12.133954] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.961 [2024-05-15 06:44:12.133972] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:57.961 [2024-05-15 06:44:12.133984] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:57.961 [2024-05-15 06:44:12.134034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.961 [2024-05-15 06:44:12.134091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.961 [2024-05-15 06:44:12.134159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.961 [2024-05-15 06:44:12.134162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.894 06:44:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:58.894 06:44:12 -- common/autotest_common.sh@852 -- # return 0 00:06:58.894 06:44:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:58.894 06:44:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:58.894 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.894 06:44:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.894 06:44:12 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:58.894 06:44:12 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:58.894 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.894 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.894 [2024-05-15 06:44:12.952580] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.894 06:44:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.894 06:44:12 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:58.894 06:44:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.894 06:44:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.894 Malloc1 00:06:58.894 06:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.894 06:44:13 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:58.894 06:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.894 06:44:13 -- common/autotest_common.sh@10 -- # set +x 00:06:58.894 06:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.894 06:44:13 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:58.895 06:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.895 06:44:13 -- common/autotest_common.sh@10 -- # set +x 00:06:58.895 06:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.895 06:44:13 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
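[Editor's note] For reference, the namespace plumbing and target bring-up that the xtrace above walks through reduce to the short sequence below. This is a sketch, not the verbatim script: rpc_cmd is assumed to wrap scripts/rpc.py against /var/tmp/spdk.sock (as in stock SPDK), and retries plus error handling are omitted.

    # One E810 port (cvl_0_0) moves into a namespace and acts as the target;
    # its sibling (cvl_0_1) stays in the root namespace as the initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target bring-up as driven by filesystem.sh over RPC:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0   # first pass: no in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1          # 512 MiB ram disk, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420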
00:06:58.895 06:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.895 06:44:13 -- common/autotest_common.sh@10 -- # set +x 00:06:59.152 [2024-05-15 06:44:13.132546] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.152 06:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.152 06:44:13 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:59.152 06:44:13 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:59.152 06:44:13 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:59.152 06:44:13 -- common/autotest_common.sh@1359 -- # local bs 00:06:59.152 06:44:13 -- common/autotest_common.sh@1360 -- # local nb 00:06:59.152 06:44:13 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:59.152 06:44:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.152 06:44:13 -- common/autotest_common.sh@10 -- # set +x 00:06:59.152 06:44:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.152 06:44:13 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:59.152 { 00:06:59.152 "name": "Malloc1", 00:06:59.152 "aliases": [ 00:06:59.152 "72d5d981-54fe-468f-96fe-307a32845e45" 00:06:59.152 ], 00:06:59.152 "product_name": "Malloc disk", 00:06:59.152 "block_size": 512, 00:06:59.152 "num_blocks": 1048576, 00:06:59.152 "uuid": "72d5d981-54fe-468f-96fe-307a32845e45", 00:06:59.152 "assigned_rate_limits": { 00:06:59.152 "rw_ios_per_sec": 0, 00:06:59.152 "rw_mbytes_per_sec": 0, 00:06:59.152 "r_mbytes_per_sec": 0, 00:06:59.152 "w_mbytes_per_sec": 0 00:06:59.152 }, 00:06:59.152 "claimed": true, 00:06:59.152 "claim_type": "exclusive_write", 00:06:59.152 "zoned": false, 00:06:59.152 "supported_io_types": { 00:06:59.152 "read": true, 00:06:59.152 "write": true, 00:06:59.152 "unmap": true, 00:06:59.152 "write_zeroes": true, 00:06:59.152 "flush": true, 00:06:59.152 "reset": true, 00:06:59.152 "compare": false, 00:06:59.152 "compare_and_write": false, 00:06:59.152 "abort": true, 00:06:59.152 "nvme_admin": false, 00:06:59.152 "nvme_io": false 00:06:59.152 }, 00:06:59.152 "memory_domains": [ 00:06:59.152 { 00:06:59.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.152 "dma_device_type": 2 00:06:59.152 } 00:06:59.152 ], 00:06:59.152 "driver_specific": {} 00:06:59.152 } 00:06:59.152 ]' 00:06:59.152 06:44:13 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:59.152 06:44:13 -- common/autotest_common.sh@1362 -- # bs=512 00:06:59.152 06:44:13 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:59.152 06:44:13 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:59.152 06:44:13 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:59.152 06:44:13 -- common/autotest_common.sh@1367 -- # echo 512 00:06:59.152 06:44:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:59.152 06:44:13 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.718 06:44:13 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:59.718 06:44:13 -- common/autotest_common.sh@1177 -- # local i=0 00:06:59.718 06:44:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.718 06:44:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:59.718 06:44:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:01.615 06:44:15 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:01.615 06:44:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:01.615 06:44:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:01.615 06:44:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:01.615 06:44:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:01.615 06:44:15 -- common/autotest_common.sh@1187 -- # return 0 00:07:01.615 06:44:15 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:01.615 06:44:15 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:01.615 06:44:15 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:01.615 06:44:15 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:01.615 06:44:15 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:01.615 06:44:15 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:01.615 06:44:15 -- setup/common.sh@80 -- # echo 536870912 00:07:01.615 06:44:15 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:01.615 06:44:15 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:01.615 06:44:15 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:01.615 06:44:15 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:02.180 06:44:16 -- target/filesystem.sh@69 -- # partprobe 00:07:02.438 06:44:16 -- target/filesystem.sh@70 -- # sleep 1 00:07:03.432 06:44:17 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:03.432 06:44:17 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:03.432 06:44:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:03.432 06:44:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.432 06:44:17 -- common/autotest_common.sh@10 -- # set +x 00:07:03.432 ************************************ 00:07:03.432 START TEST filesystem_ext4 00:07:03.432 ************************************ 00:07:03.432 06:44:17 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:03.432 06:44:17 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:03.432 06:44:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:03.432 06:44:17 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:03.432 06:44:17 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:03.432 06:44:17 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:03.433 06:44:17 -- common/autotest_common.sh@904 -- # local i=0 00:07:03.433 06:44:17 -- common/autotest_common.sh@905 -- # local force 00:07:03.433 06:44:17 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:03.433 06:44:17 -- common/autotest_common.sh@908 -- # force=-F 00:07:03.433 06:44:17 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:03.433 mke2fs 1.46.5 (30-Dec-2021) 00:07:03.433 Discarding device blocks: 0/522240 done 00:07:03.433 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:03.433 Filesystem UUID: d6125c7d-a8fc-4764-b900-c927b4a1ddc2 00:07:03.433 Superblock backups stored on blocks: 00:07:03.433 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:03.433 00:07:03.433 Allocating group tables: 0/64 done 00:07:03.433 Writing inode tables: 0/64 done 00:07:04.366 Creating journal (8192 blocks): done 00:07:05.188 Writing superblocks and filesystem accounting information: 0/64 done 00:07:05.188 00:07:05.188 06:44:19 -- 
common/autotest_common.sh@921 -- # return 0 00:07:05.188 06:44:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:05.188 06:44:19 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:05.446 06:44:19 -- target/filesystem.sh@25 -- # sync 00:07:05.446 06:44:19 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:05.446 06:44:19 -- target/filesystem.sh@27 -- # sync 00:07:05.446 06:44:19 -- target/filesystem.sh@29 -- # i=0 00:07:05.446 06:44:19 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:05.446 06:44:19 -- target/filesystem.sh@37 -- # kill -0 397448 00:07:05.446 06:44:19 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:05.446 06:44:19 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:05.446 06:44:19 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:05.446 06:44:19 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:05.446 00:07:05.446 real 0m2.026s 00:07:05.446 user 0m0.011s 00:07:05.446 sys 0m0.033s 00:07:05.446 06:44:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.446 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.446 ************************************ 00:07:05.446 END TEST filesystem_ext4 00:07:05.446 ************************************ 00:07:05.446 06:44:19 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:05.446 06:44:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:05.446 06:44:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.446 06:44:19 -- common/autotest_common.sh@10 -- # set +x 00:07:05.446 ************************************ 00:07:05.446 START TEST filesystem_btrfs 00:07:05.446 ************************************ 00:07:05.446 06:44:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:05.446 06:44:19 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:05.446 06:44:19 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:05.446 06:44:19 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:05.446 06:44:19 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:05.446 06:44:19 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:05.446 06:44:19 -- common/autotest_common.sh@904 -- # local i=0 00:07:05.446 06:44:19 -- common/autotest_common.sh@905 -- # local force 00:07:05.446 06:44:19 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:05.446 06:44:19 -- common/autotest_common.sh@910 -- # force=-f 00:07:05.446 06:44:19 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:05.704 btrfs-progs v6.6.2 00:07:05.704 See https://btrfs.readthedocs.io for more information. 00:07:05.704 00:07:05.704 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:05.704 NOTE: several default settings have changed in version 5.15, please make sure 00:07:05.704 this does not affect your deployments: 00:07:05.704 - DUP for metadata (-m dup) 00:07:05.704 - enabled no-holes (-O no-holes) 00:07:05.704 - enabled free-space-tree (-R free-space-tree) 00:07:05.704 00:07:05.704 Label: (null) 00:07:05.704 UUID: e394b0f8-b805-413a-96c0-ffc26f9ea32a 00:07:05.704 Node size: 16384 00:07:05.704 Sector size: 4096 00:07:05.704 Filesystem size: 510.00MiB 00:07:05.704 Block group profiles: 00:07:05.704 Data: single 8.00MiB 00:07:05.704 Metadata: DUP 32.00MiB 00:07:05.704 System: DUP 8.00MiB 00:07:05.704 SSD detected: yes 00:07:05.704 Zoned device: no 00:07:05.704 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:05.704 Runtime features: free-space-tree 00:07:05.704 Checksum: crc32c 00:07:05.704 Number of devices: 1 00:07:05.704 Devices: 00:07:05.704 ID SIZE PATH 00:07:05.704 1 510.00MiB /dev/nvme0n1p1 00:07:05.704 00:07:05.704 06:44:19 -- common/autotest_common.sh@921 -- # return 0 00:07:05.704 06:44:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:06.636 06:44:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:06.636 06:44:20 -- target/filesystem.sh@25 -- # sync 00:07:06.894 06:44:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:06.894 06:44:20 -- target/filesystem.sh@27 -- # sync 00:07:06.894 06:44:20 -- target/filesystem.sh@29 -- # i=0 00:07:06.894 06:44:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:06.894 06:44:20 -- target/filesystem.sh@37 -- # kill -0 397448 00:07:06.894 06:44:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:06.894 06:44:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:06.894 06:44:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:06.894 06:44:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:06.894 00:07:06.894 real 0m1.425s 00:07:06.894 user 0m0.018s 00:07:06.894 sys 0m0.046s 00:07:06.894 06:44:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.894 06:44:20 -- common/autotest_common.sh@10 -- # set +x 00:07:06.894 ************************************ 00:07:06.894 END TEST filesystem_btrfs 00:07:06.894 ************************************ 00:07:06.894 06:44:20 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:06.894 06:44:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:06.894 06:44:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.894 06:44:20 -- common/autotest_common.sh@10 -- # set +x 00:07:06.894 ************************************ 00:07:06.894 START TEST filesystem_xfs 00:07:06.894 ************************************ 00:07:06.894 06:44:20 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:06.894 06:44:20 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:06.894 06:44:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:06.894 06:44:20 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:06.894 06:44:20 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:06.894 06:44:20 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:06.894 06:44:20 -- common/autotest_common.sh@904 -- # local i=0 00:07:06.894 06:44:20 -- common/autotest_common.sh@905 -- # local force 00:07:06.894 06:44:20 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:06.894 06:44:20 -- common/autotest_common.sh@910 -- # force=-f 00:07:06.894 06:44:20 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:06.894 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:06.894 = sectsz=512 attr=2, projid32bit=1 00:07:06.894 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:06.894 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:06.894 data = bsize=4096 blocks=130560, imaxpct=25 00:07:06.894 = sunit=0 swidth=0 blks 00:07:06.894 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:06.894 log =internal log bsize=4096 blocks=16384, version=2 00:07:06.894 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:06.894 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:08.266 Discarding blocks...Done. 00:07:08.266 06:44:22 -- common/autotest_common.sh@921 -- # return 0 00:07:08.266 06:44:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:10.792 06:44:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:10.792 06:44:24 -- target/filesystem.sh@25 -- # sync 00:07:10.792 06:44:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:10.792 06:44:24 -- target/filesystem.sh@27 -- # sync 00:07:10.792 06:44:24 -- target/filesystem.sh@29 -- # i=0 00:07:10.792 06:44:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:10.792 06:44:24 -- target/filesystem.sh@37 -- # kill -0 397448 00:07:10.792 06:44:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:10.792 06:44:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:10.792 06:44:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:10.792 06:44:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:10.792 00:07:10.792 real 0m3.656s 00:07:10.792 user 0m0.019s 00:07:10.792 sys 0m0.040s 00:07:10.792 06:44:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.792 06:44:24 -- common/autotest_common.sh@10 -- # set +x 00:07:10.792 ************************************ 00:07:10.792 END TEST filesystem_xfs 00:07:10.792 ************************************ 00:07:10.792 06:44:24 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:10.792 06:44:24 -- target/filesystem.sh@93 -- # sync 00:07:10.792 06:44:24 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:11.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.051 06:44:25 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:11.051 06:44:25 -- common/autotest_common.sh@1198 -- # local i=0 00:07:11.051 06:44:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:11.051 06:44:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.051 06:44:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:11.051 06:44:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:11.051 06:44:25 -- common/autotest_common.sh@1210 -- # return 0 00:07:11.051 06:44:25 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:11.051 06:44:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:11.051 06:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.051 06:44:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:11.051 06:44:25 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:11.051 06:44:25 -- target/filesystem.sh@101 -- # killprocess 397448 00:07:11.051 06:44:25 -- common/autotest_common.sh@926 -- # '[' -z 397448 ']' 00:07:11.051 06:44:25 -- common/autotest_common.sh@930 -- # kill -0 397448 00:07:11.051 06:44:25 -- 
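[Editor's note] Each filesystem_* subtest above (ext4, btrfs, xfs) runs the same sanity loop once nvme connect has attached the subsystem as nvme0n1. A condensed sketch of target/filesystem.sh@23-43 as reconstructed from the trace, with $nvmfpid standing in for the PID nvmfappstart recorded (397448 in this pass) and the umount retry loop left out:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # the target must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # device and partition still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1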
common/autotest_common.sh@931 -- # uname 00:07:11.051 06:44:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:11.051 06:44:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 397448 00:07:11.051 06:44:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:11.051 06:44:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:11.051 06:44:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 397448' 00:07:11.051 killing process with pid 397448 00:07:11.051 06:44:25 -- common/autotest_common.sh@945 -- # kill 397448 00:07:11.051 06:44:25 -- common/autotest_common.sh@950 -- # wait 397448 00:07:11.618 06:44:25 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:11.618 00:07:11.618 real 0m13.727s 00:07:11.618 user 0m52.826s 00:07:11.618 sys 0m1.775s 00:07:11.618 06:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.618 06:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.618 ************************************ 00:07:11.618 END TEST nvmf_filesystem_no_in_capsule 00:07:11.618 ************************************ 00:07:11.618 06:44:25 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:11.618 06:44:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:11.618 06:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.618 06:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.618 ************************************ 00:07:11.618 START TEST nvmf_filesystem_in_capsule 00:07:11.618 ************************************ 00:07:11.618 06:44:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:11.618 06:44:25 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:11.618 06:44:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:11.618 06:44:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:11.618 06:44:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:11.618 06:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.618 06:44:25 -- nvmf/common.sh@469 -- # nvmfpid=399320 00:07:11.618 06:44:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:11.618 06:44:25 -- nvmf/common.sh@470 -- # waitforlisten 399320 00:07:11.618 06:44:25 -- common/autotest_common.sh@819 -- # '[' -z 399320 ']' 00:07:11.618 06:44:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.618 06:44:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.618 06:44:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.618 06:44:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.618 06:44:25 -- common/autotest_common.sh@10 -- # set +x 00:07:11.618 [2024-05-15 06:44:25.702888] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:11.618 [2024-05-15 06:44:25.703013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.618 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.618 [2024-05-15 06:44:25.784534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.876 [2024-05-15 06:44:25.903031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.876 [2024-05-15 06:44:25.903195] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.876 [2024-05-15 06:44:25.903215] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.876 [2024-05-15 06:44:25.903230] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.876 [2024-05-15 06:44:25.903299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.876 [2024-05-15 06:44:25.903361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.876 [2024-05-15 06:44:25.903415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.876 [2024-05-15 06:44:25.903418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.442 06:44:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:12.442 06:44:26 -- common/autotest_common.sh@852 -- # return 0 00:07:12.442 06:44:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:12.442 06:44:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:12.442 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.700 06:44:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.700 06:44:26 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:12.700 06:44:26 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:12.700 06:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.700 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.700 [2024-05-15 06:44:26.692461] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.700 06:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.700 06:44:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:12.700 06:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.700 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.700 Malloc1 00:07:12.700 06:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.700 06:44:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:12.700 06:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.700 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.700 06:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.700 06:44:26 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:12.700 06:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.700 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.700 06:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.700 06:44:26 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
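[Editor's note] This second test repeats the whole flow with in-capsule data enabled; the only functional difference visible in the trace is the transport option that filesystem.sh@52 passes through. Sketched from the two nvmf_create_transport calls in this log (the reading of -c as the in-capsule data size follows stock SPDK rpc.py):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: in-capsule data off
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: up to 4096 B of write
                                                              # data carried inside the capsule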
00:07:12.700 06:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.700 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.700 [2024-05-15 06:44:26.872398] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.700 06:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.700 06:44:26 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:12.700 06:44:26 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:12.700 06:44:26 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:12.700 06:44:26 -- common/autotest_common.sh@1359 -- # local bs 00:07:12.700 06:44:26 -- common/autotest_common.sh@1360 -- # local nb 00:07:12.700 06:44:26 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:12.700 06:44:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.700 06:44:26 -- common/autotest_common.sh@10 -- # set +x 00:07:12.700 06:44:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.700 06:44:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:12.700 { 00:07:12.700 "name": "Malloc1", 00:07:12.700 "aliases": [ 00:07:12.700 "6704b71e-1f68-4703-9234-58a5baf8dfad" 00:07:12.700 ], 00:07:12.700 "product_name": "Malloc disk", 00:07:12.700 "block_size": 512, 00:07:12.700 "num_blocks": 1048576, 00:07:12.700 "uuid": "6704b71e-1f68-4703-9234-58a5baf8dfad", 00:07:12.700 "assigned_rate_limits": { 00:07:12.700 "rw_ios_per_sec": 0, 00:07:12.700 "rw_mbytes_per_sec": 0, 00:07:12.700 "r_mbytes_per_sec": 0, 00:07:12.700 "w_mbytes_per_sec": 0 00:07:12.700 }, 00:07:12.700 "claimed": true, 00:07:12.700 "claim_type": "exclusive_write", 00:07:12.700 "zoned": false, 00:07:12.700 "supported_io_types": { 00:07:12.700 "read": true, 00:07:12.700 "write": true, 00:07:12.700 "unmap": true, 00:07:12.700 "write_zeroes": true, 00:07:12.700 "flush": true, 00:07:12.700 "reset": true, 00:07:12.700 "compare": false, 00:07:12.700 "compare_and_write": false, 00:07:12.700 "abort": true, 00:07:12.700 "nvme_admin": false, 00:07:12.700 "nvme_io": false 00:07:12.700 }, 00:07:12.700 "memory_domains": [ 00:07:12.700 { 00:07:12.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.700 "dma_device_type": 2 00:07:12.700 } 00:07:12.700 ], 00:07:12.700 "driver_specific": {} 00:07:12.700 } 00:07:12.700 ]' 00:07:12.700 06:44:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:12.700 06:44:26 -- common/autotest_common.sh@1362 -- # bs=512 00:07:12.700 06:44:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:12.958 06:44:26 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:12.958 06:44:26 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:12.958 06:44:26 -- common/autotest_common.sh@1367 -- # echo 512 00:07:12.958 06:44:26 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:12.958 06:44:26 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.524 06:44:27 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:13.524 06:44:27 -- common/autotest_common.sh@1177 -- # local i=0 00:07:13.524 06:44:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.524 06:44:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:13.524 06:44:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:15.423 06:44:29 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:15.423 06:44:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:15.423 06:44:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.423 06:44:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:15.423 06:44:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.423 06:44:29 -- common/autotest_common.sh@1187 -- # return 0 00:07:15.423 06:44:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:15.423 06:44:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:15.423 06:44:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:15.423 06:44:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:15.423 06:44:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:15.423 06:44:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:15.423 06:44:29 -- setup/common.sh@80 -- # echo 536870912 00:07:15.423 06:44:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:15.423 06:44:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:15.423 06:44:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:15.423 06:44:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:15.681 06:44:29 -- target/filesystem.sh@69 -- # partprobe 00:07:16.246 06:44:30 -- target/filesystem.sh@70 -- # sleep 1 00:07:17.181 06:44:31 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:17.181 06:44:31 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:17.181 06:44:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:17.181 06:44:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.181 06:44:31 -- common/autotest_common.sh@10 -- # set +x 00:07:17.181 ************************************ 00:07:17.181 START TEST filesystem_in_capsule_ext4 00:07:17.181 ************************************ 00:07:17.181 06:44:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:17.181 06:44:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:17.181 06:44:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:17.181 06:44:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:17.181 06:44:31 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:17.181 06:44:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:17.181 06:44:31 -- common/autotest_common.sh@904 -- # local i=0 00:07:17.181 06:44:31 -- common/autotest_common.sh@905 -- # local force 00:07:17.181 06:44:31 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:17.181 06:44:31 -- common/autotest_common.sh@908 -- # force=-F 00:07:17.181 06:44:31 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:17.181 mke2fs 1.46.5 (30-Dec-2021) 00:07:17.439 Discarding device blocks: 0/522240 done 00:07:17.439 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:17.439 Filesystem UUID: 9d666b54-9953-4d46-90cf-a7d32037a6e2 00:07:17.439 Superblock backups stored on blocks: 00:07:17.439 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:17.439 00:07:17.439 Allocating group tables: 0/64 done 00:07:17.439 Writing inode tables: 0/64 done 00:07:18.004 Creating journal (8192 blocks): done 00:07:18.004 Writing superblocks and filesystem accounting information: 0/64 done 00:07:18.004 00:07:18.004 
06:44:32 -- common/autotest_common.sh@921 -- # return 0 00:07:18.004 06:44:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:18.263 06:44:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:18.263 06:44:32 -- target/filesystem.sh@25 -- # sync 00:07:18.263 06:44:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:18.263 06:44:32 -- target/filesystem.sh@27 -- # sync 00:07:18.263 06:44:32 -- target/filesystem.sh@29 -- # i=0 00:07:18.263 06:44:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:18.263 06:44:32 -- target/filesystem.sh@37 -- # kill -0 399320 00:07:18.263 06:44:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:18.263 06:44:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:18.263 06:44:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:18.263 06:44:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:18.263 00:07:18.263 real 0m1.091s 00:07:18.263 user 0m0.019s 00:07:18.263 sys 0m0.033s 00:07:18.263 06:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.263 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 ************************************ 00:07:18.263 END TEST filesystem_in_capsule_ext4 00:07:18.263 ************************************ 00:07:18.263 06:44:32 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:18.263 06:44:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:18.263 06:44:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.263 06:44:32 -- common/autotest_common.sh@10 -- # set +x 00:07:18.263 ************************************ 00:07:18.263 START TEST filesystem_in_capsule_btrfs 00:07:18.263 ************************************ 00:07:18.263 06:44:32 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:18.263 06:44:32 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:18.263 06:44:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:18.263 06:44:32 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:18.263 06:44:32 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:18.263 06:44:32 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:18.263 06:44:32 -- common/autotest_common.sh@904 -- # local i=0 00:07:18.263 06:44:32 -- common/autotest_common.sh@905 -- # local force 00:07:18.263 06:44:32 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:18.263 06:44:32 -- common/autotest_common.sh@910 -- # force=-f 00:07:18.263 06:44:32 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:18.868 btrfs-progs v6.6.2 00:07:18.868 See https://btrfs.readthedocs.io for more information. 00:07:18.868 00:07:18.868 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:18.868 NOTE: several default settings have changed in version 5.15, please make sure 00:07:18.868 this does not affect your deployments: 00:07:18.868 - DUP for metadata (-m dup) 00:07:18.868 - enabled no-holes (-O no-holes) 00:07:18.868 - enabled free-space-tree (-R free-space-tree) 00:07:18.868 00:07:18.868 Label: (null) 00:07:18.868 UUID: c11e9df0-899a-49a5-8a81-935e30183307 00:07:18.868 Node size: 16384 00:07:18.868 Sector size: 4096 00:07:18.868 Filesystem size: 510.00MiB 00:07:18.868 Block group profiles: 00:07:18.868 Data: single 8.00MiB 00:07:18.868 Metadata: DUP 32.00MiB 00:07:18.868 System: DUP 8.00MiB 00:07:18.868 SSD detected: yes 00:07:18.868 Zoned device: no 00:07:18.868 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:18.868 Runtime features: free-space-tree 00:07:18.868 Checksum: crc32c 00:07:18.868 Number of devices: 1 00:07:18.868 Devices: 00:07:18.868 ID SIZE PATH 00:07:18.868 1 510.00MiB /dev/nvme0n1p1 00:07:18.868 00:07:18.868 06:44:32 -- common/autotest_common.sh@921 -- # return 0 00:07:18.868 06:44:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:19.813 06:44:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:19.813 06:44:33 -- target/filesystem.sh@25 -- # sync 00:07:19.813 06:44:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:19.813 06:44:33 -- target/filesystem.sh@27 -- # sync 00:07:19.813 06:44:33 -- target/filesystem.sh@29 -- # i=0 00:07:19.813 06:44:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:19.813 06:44:33 -- target/filesystem.sh@37 -- # kill -0 399320 00:07:19.813 06:44:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:19.813 06:44:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:19.813 06:44:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:19.813 06:44:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:19.813 00:07:19.813 real 0m1.325s 00:07:19.813 user 0m0.017s 00:07:19.813 sys 0m0.039s 00:07:19.813 06:44:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.813 06:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:19.813 ************************************ 00:07:19.813 END TEST filesystem_in_capsule_btrfs 00:07:19.813 ************************************ 00:07:19.813 06:44:33 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:19.813 06:44:33 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:19.813 06:44:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.813 06:44:33 -- common/autotest_common.sh@10 -- # set +x 00:07:19.813 ************************************ 00:07:19.813 START TEST filesystem_in_capsule_xfs 00:07:19.813 ************************************ 00:07:19.814 06:44:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:19.814 06:44:33 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:19.814 06:44:33 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:19.814 06:44:33 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:19.814 06:44:33 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:19.814 06:44:33 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:19.814 06:44:33 -- common/autotest_common.sh@904 -- # local i=0 00:07:19.814 06:44:33 -- common/autotest_common.sh@905 -- # local force 00:07:19.814 06:44:33 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:19.814 06:44:33 -- common/autotest_common.sh@910 -- # force=-f 
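[Editor's note] The make_filesystem helper whose xtrace repeats above (autotest_common.sh@902-921) only varies the force flag by filesystem type. A reconstruction from the trace, with the helper's retry counter (the local i=0 visible at @904) simplified to a single attempt:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # mke2fs takes uppercase -F to force
        else
            force=-f    # mkfs.xfs and mkfs.btrfs take lowercase -f
        fi
        mkfs.$fstype $force "$dev_name" && return 0
    }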
00:07:19.814 06:44:33 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:19.814 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:19.814 = sectsz=512 attr=2, projid32bit=1 00:07:19.814 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:19.814 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:19.814 data = bsize=4096 blocks=130560, imaxpct=25 00:07:19.814 = sunit=0 swidth=0 blks 00:07:19.814 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:19.814 log =internal log bsize=4096 blocks=16384, version=2 00:07:19.814 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:19.814 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:20.379 Discarding blocks...Done. 00:07:20.379 06:44:34 -- common/autotest_common.sh@921 -- # return 0 00:07:20.379 06:44:34 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:22.907 06:44:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.907 06:44:36 -- target/filesystem.sh@25 -- # sync 00:07:22.907 06:44:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.907 06:44:36 -- target/filesystem.sh@27 -- # sync 00:07:22.907 06:44:36 -- target/filesystem.sh@29 -- # i=0 00:07:22.907 06:44:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:22.907 06:44:36 -- target/filesystem.sh@37 -- # kill -0 399320 00:07:22.907 06:44:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:22.907 06:44:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:22.907 06:44:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:22.907 06:44:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:22.907 00:07:22.907 real 0m3.023s 00:07:22.907 user 0m0.018s 00:07:22.907 sys 0m0.032s 00:07:22.907 06:44:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.907 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.907 ************************************ 00:07:22.907 END TEST filesystem_in_capsule_xfs 00:07:22.907 ************************************ 00:07:22.907 06:44:36 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:22.907 06:44:36 -- target/filesystem.sh@93 -- # sync 00:07:22.907 06:44:36 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.907 06:44:36 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.907 06:44:36 -- common/autotest_common.sh@1198 -- # local i=0 00:07:22.907 06:44:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:22.907 06:44:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.907 06:44:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:22.907 06:44:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.907 06:44:36 -- common/autotest_common.sh@1210 -- # return 0 00:07:22.907 06:44:36 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.907 06:44:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:22.907 06:44:36 -- common/autotest_common.sh@10 -- # set +x 00:07:22.907 06:44:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:22.907 06:44:36 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:22.907 06:44:36 -- target/filesystem.sh@101 -- # killprocess 399320 00:07:22.907 06:44:36 -- common/autotest_common.sh@926 -- # '[' -z 399320 ']' 00:07:22.907 06:44:36 -- common/autotest_common.sh@930 -- # kill -0 399320 
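[Editor's note] Teardown mirrors the bring-up. The sequence the trace shows once the test body finishes, sketched with $nvmfpid standing in for the recorded PID (399320 in this pass):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # waitforserial_disconnect then polls
                                                         # lsblk until the serial disappears
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                   # killprocess @945/@950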
00:07:22.907 06:44:36 -- common/autotest_common.sh@931 -- # uname 00:07:22.907 06:44:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:22.907 06:44:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 399320 00:07:22.907 06:44:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:22.907 06:44:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:22.907 06:44:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 399320' 00:07:22.907 killing process with pid 399320 00:07:22.907 06:44:37 -- common/autotest_common.sh@945 -- # kill 399320 00:07:22.907 06:44:37 -- common/autotest_common.sh@950 -- # wait 399320 00:07:23.475 06:44:37 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:23.475 00:07:23.475 real 0m11.843s 00:07:23.475 user 0m45.357s 00:07:23.475 sys 0m1.659s 00:07:23.475 06:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.475 06:44:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.475 ************************************ 00:07:23.475 END TEST nvmf_filesystem_in_capsule 00:07:23.475 ************************************ 00:07:23.475 06:44:37 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:23.475 06:44:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:23.475 06:44:37 -- nvmf/common.sh@116 -- # sync 00:07:23.475 06:44:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:23.475 06:44:37 -- nvmf/common.sh@119 -- # set +e 00:07:23.475 06:44:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:23.475 06:44:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:23.475 rmmod nvme_tcp 00:07:23.475 rmmod nvme_fabrics 00:07:23.475 rmmod nvme_keyring 00:07:23.475 06:44:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:23.475 06:44:37 -- nvmf/common.sh@123 -- # set -e 00:07:23.475 06:44:37 -- nvmf/common.sh@124 -- # return 0 00:07:23.475 06:44:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:23.475 06:44:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:23.475 06:44:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:23.475 06:44:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:23.475 06:44:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.475 06:44:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:23.475 06:44:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.475 06:44:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.475 06:44:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.384 06:44:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:25.384 00:07:25.384 real 0m30.412s 00:07:25.384 user 1m39.179s 00:07:25.384 sys 0m5.316s 00:07:25.384 06:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.384 06:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.384 ************************************ 00:07:25.384 END TEST nvmf_filesystem 00:07:25.384 ************************************ 00:07:25.642 06:44:39 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:25.642 06:44:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:25.642 06:44:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.642 06:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:25.642 ************************************ 00:07:25.642 START TEST nvmf_discovery 00:07:25.642 ************************************ 00:07:25.642 06:44:39 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:25.642 * Looking for test storage... 00:07:25.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.642 06:44:39 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.642 06:44:39 -- nvmf/common.sh@7 -- # uname -s 00:07:25.642 06:44:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.642 06:44:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.642 06:44:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.642 06:44:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.642 06:44:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.642 06:44:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.642 06:44:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.642 06:44:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.642 06:44:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.642 06:44:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.642 06:44:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.642 06:44:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:25.642 06:44:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.642 06:44:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.642 06:44:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.642 06:44:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.642 06:44:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.643 06:44:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.643 06:44:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.643 06:44:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.643 06:44:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.643 06:44:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.643 06:44:39 -- paths/export.sh@5 -- # export PATH 00:07:25.643 06:44:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.643 06:44:39 -- nvmf/common.sh@46 -- # : 0 00:07:25.643 06:44:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:25.643 06:44:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:25.643 06:44:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:25.643 06:44:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.643 06:44:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.643 06:44:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:25.643 06:44:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:25.643 06:44:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:25.643 06:44:39 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:25.643 06:44:39 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:25.643 06:44:39 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:25.643 06:44:39 -- target/discovery.sh@15 -- # hash nvme 00:07:25.643 06:44:39 -- target/discovery.sh@20 -- # nvmftestinit 00:07:25.643 06:44:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:25.643 06:44:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.643 06:44:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:25.643 06:44:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:25.643 06:44:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:25.643 06:44:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.643 06:44:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.643 06:44:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.643 06:44:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:25.643 06:44:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:25.643 06:44:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:25.643 06:44:39 -- common/autotest_common.sh@10 -- # set +x 00:07:28.171 06:44:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:28.171 06:44:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:28.171 06:44:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:28.171 06:44:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:28.171 06:44:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:28.171 06:44:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:28.171 06:44:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:28.171 06:44:42 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:28.171 06:44:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:28.171 06:44:42 -- nvmf/common.sh@295 -- # e810=() 00:07:28.171 06:44:42 -- nvmf/common.sh@295 -- # local -ga e810 00:07:28.171 06:44:42 -- nvmf/common.sh@296 -- # x722=() 00:07:28.171 06:44:42 -- nvmf/common.sh@296 -- # local -ga x722 00:07:28.171 06:44:42 -- nvmf/common.sh@297 -- # mlx=() 00:07:28.171 06:44:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:28.171 06:44:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.171 06:44:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.172 06:44:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.172 06:44:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.172 06:44:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:28.172 06:44:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:28.172 06:44:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:28.172 06:44:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:28.172 06:44:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:28.172 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:28.172 06:44:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:28.172 06:44:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:28.172 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:28.172 06:44:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:28.172 06:44:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:28.172 06:44:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.172 06:44:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:28.172 06:44:42 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.172 06:44:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:28.172 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:28.172 06:44:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.172 06:44:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:28.172 06:44:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.172 06:44:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:28.172 06:44:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.172 06:44:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:28.172 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:28.172 06:44:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.172 06:44:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:28.172 06:44:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:28.172 06:44:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:28.172 06:44:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.172 06:44:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.172 06:44:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.172 06:44:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:28.172 06:44:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.172 06:44:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.172 06:44:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:28.172 06:44:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.172 06:44:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.172 06:44:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:28.172 06:44:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:28.172 06:44:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.172 06:44:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.172 06:44:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.172 06:44:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.172 06:44:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:28.172 06:44:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.172 06:44:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.172 06:44:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.172 06:44:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:28.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:28.172 00:07:28.172 --- 10.0.0.2 ping statistics --- 00:07:28.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.172 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:28.172 06:44:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:07:28.172 00:07:28.172 --- 10.0.0.1 ping statistics --- 00:07:28.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.172 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:07:28.172 06:44:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.172 06:44:42 -- nvmf/common.sh@410 -- # return 0 00:07:28.172 06:44:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:28.172 06:44:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.172 06:44:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:28.172 06:44:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.172 06:44:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:28.172 06:44:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:28.172 06:44:42 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:28.172 06:44:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:28.172 06:44:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:28.172 06:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.172 06:44:42 -- nvmf/common.sh@469 -- # nvmfpid=403149 00:07:28.172 06:44:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:28.172 06:44:42 -- nvmf/common.sh@470 -- # waitforlisten 403149 00:07:28.172 06:44:42 -- common/autotest_common.sh@819 -- # '[' -z 403149 ']' 00:07:28.172 06:44:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.172 06:44:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:28.172 06:44:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.172 06:44:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:28.172 06:44:42 -- common/autotest_common.sh@10 -- # set +x 00:07:28.172 [2024-05-15 06:44:42.346010] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:28.172 [2024-05-15 06:44:42.346100] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.172 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.430 [2024-05-15 06:44:42.425811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.430 [2024-05-15 06:44:42.533836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:28.430 [2024-05-15 06:44:42.533996] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.430 [2024-05-15 06:44:42.534015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.430 [2024-05-15 06:44:42.534027] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:28.430 [2024-05-15 06:44:42.534104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.430 [2024-05-15 06:44:42.534165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.430 [2024-05-15 06:44:42.534234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.430 [2024-05-15 06:44:42.534237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.361 06:44:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:29.361 06:44:43 -- common/autotest_common.sh@852 -- # return 0 00:07:29.361 06:44:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:29.361 06:44:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:29.361 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.361 06:44:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.361 06:44:43 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:29.361 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.361 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.361 [2024-05-15 06:44:43.311392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.361 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.361 06:44:43 -- target/discovery.sh@26 -- # seq 1 4 00:07:29.361 06:44:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.361 06:44:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:29.361 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.361 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.361 Null1 00:07:29.361 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.361 06:44:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:29.361 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.361 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.361 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.361 06:44:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:29.361 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.361 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.361 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.361 06:44:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 [2024-05-15 06:44:43.351673] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.362 06:44:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 Null2 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:29.362 06:44:43 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.362 06:44:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 Null3 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:29.362 06:44:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 Null4 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:29.362 
06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.362 06:44:43 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:29.362 00:07:29.362 Discovery Log Number of Records 6, Generation counter 6 00:07:29.362 =====Discovery Log Entry 0====== 00:07:29.362 trtype: tcp 00:07:29.362 adrfam: ipv4 00:07:29.362 subtype: current discovery subsystem 00:07:29.362 treq: not required 00:07:29.362 portid: 0 00:07:29.362 trsvcid: 4420 00:07:29.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:29.362 traddr: 10.0.0.2 00:07:29.362 eflags: explicit discovery connections, duplicate discovery information 00:07:29.362 sectype: none 00:07:29.362 =====Discovery Log Entry 1====== 00:07:29.362 trtype: tcp 00:07:29.362 adrfam: ipv4 00:07:29.362 subtype: nvme subsystem 00:07:29.362 treq: not required 00:07:29.362 portid: 0 00:07:29.362 trsvcid: 4420 00:07:29.362 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:29.362 traddr: 10.0.0.2 00:07:29.362 eflags: none 00:07:29.362 sectype: none 00:07:29.362 =====Discovery Log Entry 2====== 00:07:29.362 trtype: tcp 00:07:29.362 adrfam: ipv4 00:07:29.362 subtype: nvme subsystem 00:07:29.362 treq: not required 00:07:29.362 portid: 0 00:07:29.362 trsvcid: 4420 00:07:29.362 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:29.362 traddr: 10.0.0.2 00:07:29.362 eflags: none 00:07:29.362 sectype: none 00:07:29.362 =====Discovery Log Entry 3====== 00:07:29.362 trtype: tcp 00:07:29.362 adrfam: ipv4 00:07:29.362 subtype: nvme subsystem 00:07:29.362 treq: not required 00:07:29.362 portid: 0 00:07:29.362 trsvcid: 4420 00:07:29.362 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:29.362 traddr: 10.0.0.2 00:07:29.362 eflags: none 00:07:29.362 sectype: none 00:07:29.362 =====Discovery Log Entry 4====== 00:07:29.362 trtype: tcp 00:07:29.362 adrfam: ipv4 00:07:29.362 subtype: nvme subsystem 00:07:29.362 treq: not required 00:07:29.362 portid: 0 00:07:29.362 trsvcid: 4420 00:07:29.362 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:29.362 traddr: 10.0.0.2 00:07:29.362 eflags: none 00:07:29.362 sectype: none 00:07:29.362 =====Discovery Log Entry 5====== 00:07:29.362 trtype: tcp 00:07:29.362 adrfam: ipv4 00:07:29.362 subtype: discovery subsystem referral 00:07:29.362 treq: not required 00:07:29.362 portid: 0 00:07:29.362 trsvcid: 4430 00:07:29.362 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:29.362 traddr: 10.0.0.2 00:07:29.362 eflags: none 00:07:29.362 sectype: none 00:07:29.362 06:44:43 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:29.362 Perform nvmf subsystem discovery via RPC 00:07:29.362 06:44:43 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:29.362 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.362 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 [2024-05-15 06:44:43.536089] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:29.362 [ 00:07:29.362 { 00:07:29.362 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:29.362 "subtype": "Discovery", 00:07:29.362 "listen_addresses": [ 00:07:29.362 { 00:07:29.362 "transport": "TCP", 00:07:29.362 "trtype": "TCP", 00:07:29.362 "adrfam": "IPv4", 00:07:29.362 "traddr": "10.0.0.2", 00:07:29.362 "trsvcid": "4420" 00:07:29.362 } 00:07:29.362 ], 00:07:29.362 "allow_any_host": true, 00:07:29.362 "hosts": [] 00:07:29.362 }, 00:07:29.362 { 00:07:29.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:29.362 "subtype": "NVMe", 00:07:29.362 "listen_addresses": [ 00:07:29.362 { 00:07:29.362 "transport": "TCP", 00:07:29.362 "trtype": "TCP", 00:07:29.362 "adrfam": "IPv4", 00:07:29.362 "traddr": "10.0.0.2", 00:07:29.362 "trsvcid": "4420" 00:07:29.362 } 00:07:29.362 ], 00:07:29.362 "allow_any_host": true, 00:07:29.362 "hosts": [], 00:07:29.362 "serial_number": "SPDK00000000000001", 00:07:29.362 "model_number": "SPDK bdev Controller", 00:07:29.362 "max_namespaces": 32, 00:07:29.362 "min_cntlid": 1, 00:07:29.362 "max_cntlid": 65519, 00:07:29.362 "namespaces": [ 00:07:29.362 { 00:07:29.362 "nsid": 1, 00:07:29.362 "bdev_name": "Null1", 00:07:29.362 "name": "Null1", 00:07:29.362 "nguid": "DD62B396D88D410B87C6EE44FBBA80F5", 00:07:29.363 "uuid": "dd62b396-d88d-410b-87c6-ee44fbba80f5" 00:07:29.363 } 00:07:29.363 ] 00:07:29.363 }, 00:07:29.363 { 00:07:29.363 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:29.363 "subtype": "NVMe", 00:07:29.363 "listen_addresses": [ 00:07:29.363 { 00:07:29.363 "transport": "TCP", 00:07:29.363 "trtype": "TCP", 00:07:29.363 "adrfam": "IPv4", 00:07:29.363 "traddr": "10.0.0.2", 00:07:29.363 "trsvcid": "4420" 00:07:29.363 } 00:07:29.363 ], 00:07:29.363 "allow_any_host": true, 00:07:29.363 "hosts": [], 00:07:29.363 "serial_number": "SPDK00000000000002", 00:07:29.363 "model_number": "SPDK bdev Controller", 00:07:29.363 "max_namespaces": 32, 00:07:29.363 "min_cntlid": 1, 00:07:29.363 "max_cntlid": 65519, 00:07:29.363 "namespaces": [ 00:07:29.363 { 00:07:29.363 "nsid": 1, 00:07:29.363 "bdev_name": "Null2", 00:07:29.363 "name": "Null2", 00:07:29.363 "nguid": "7E7525DA59E74CB39509E0B271996790", 00:07:29.363 "uuid": "7e7525da-59e7-4cb3-9509-e0b271996790" 00:07:29.363 } 00:07:29.363 ] 00:07:29.363 }, 00:07:29.363 { 00:07:29.363 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:29.363 "subtype": "NVMe", 00:07:29.363 "listen_addresses": [ 00:07:29.363 { 00:07:29.363 "transport": "TCP", 00:07:29.363 "trtype": "TCP", 00:07:29.363 "adrfam": "IPv4", 00:07:29.363 "traddr": "10.0.0.2", 00:07:29.363 "trsvcid": "4420" 00:07:29.363 } 00:07:29.363 ], 00:07:29.363 "allow_any_host": true, 00:07:29.363 "hosts": [], 00:07:29.363 "serial_number": "SPDK00000000000003", 00:07:29.363 "model_number": "SPDK bdev Controller", 00:07:29.363 "max_namespaces": 32, 00:07:29.363 "min_cntlid": 1, 00:07:29.363 "max_cntlid": 65519, 00:07:29.363 "namespaces": [ 00:07:29.363 { 00:07:29.363 "nsid": 1, 00:07:29.363 "bdev_name": "Null3", 00:07:29.363 "name": "Null3", 00:07:29.363 "nguid": "08DF02D2B9954F4C9744E270BCA0643C", 00:07:29.363 "uuid": "08df02d2-b995-4f4c-9744-e270bca0643c" 00:07:29.363 } 00:07:29.363 ] 
00:07:29.363 }, 00:07:29.363 { 00:07:29.363 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:29.363 "subtype": "NVMe", 00:07:29.363 "listen_addresses": [ 00:07:29.363 { 00:07:29.363 "transport": "TCP", 00:07:29.363 "trtype": "TCP", 00:07:29.363 "adrfam": "IPv4", 00:07:29.363 "traddr": "10.0.0.2", 00:07:29.363 "trsvcid": "4420" 00:07:29.363 } 00:07:29.363 ], 00:07:29.363 "allow_any_host": true, 00:07:29.363 "hosts": [], 00:07:29.363 "serial_number": "SPDK00000000000004", 00:07:29.363 "model_number": "SPDK bdev Controller", 00:07:29.363 "max_namespaces": 32, 00:07:29.363 "min_cntlid": 1, 00:07:29.363 "max_cntlid": 65519, 00:07:29.363 "namespaces": [ 00:07:29.363 { 00:07:29.363 "nsid": 1, 00:07:29.363 "bdev_name": "Null4", 00:07:29.363 "name": "Null4", 00:07:29.363 "nguid": "D06CE04424F44EA2A8AA9C60A515296B", 00:07:29.363 "uuid": "d06ce044-24f4-4ea2-a8aa-9c60a515296b" 00:07:29.363 } 00:07:29.363 ] 00:07:29.363 } 00:07:29.363 ] 00:07:29.363 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.363 06:44:43 -- target/discovery.sh@42 -- # seq 1 4 00:07:29.363 06:44:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.363 06:44:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:29.363 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.363 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.363 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.363 06:44:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:29.363 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.363 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.363 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.363 06:44:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.363 06:44:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:29.363 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.363 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.363 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.363 06:44:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:29.363 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.363 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.363 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.363 06:44:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.363 06:44:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:29.363 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.363 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.363 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.363 06:44:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:29.363 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.363 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.621 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.621 06:44:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:29.621 06:44:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:29.621 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.621 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.621 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
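For reference, the four-subsystem layout dumped by nvmf_get_subsystems above, which the teardown loop is now unwinding, was built by the setup loop traced earlier in this test. Condensed to its essentials (a sketch; rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, and the wrapper's socket handling is omitted):

    # One TCP transport, four null-bdev-backed subsystems, a discovery listener, a 4430 referral.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512       # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430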
00:07:29.621 06:44:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:29.621 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.621 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.621 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.621 06:44:43 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:29.621 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.621 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.621 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.621 06:44:43 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:29.621 06:44:43 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:29.621 06:44:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:29.621 06:44:43 -- common/autotest_common.sh@10 -- # set +x 00:07:29.621 06:44:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:29.621 06:44:43 -- target/discovery.sh@49 -- # check_bdevs= 00:07:29.621 06:44:43 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:29.621 06:44:43 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:29.621 06:44:43 -- target/discovery.sh@57 -- # nvmftestfini 00:07:29.621 06:44:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:29.621 06:44:43 -- nvmf/common.sh@116 -- # sync 00:07:29.621 06:44:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:29.621 06:44:43 -- nvmf/common.sh@119 -- # set +e 00:07:29.621 06:44:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:29.621 06:44:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:29.621 rmmod nvme_tcp 00:07:29.621 rmmod nvme_fabrics 00:07:29.621 rmmod nvme_keyring 00:07:29.621 06:44:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:29.621 06:44:43 -- nvmf/common.sh@123 -- # set -e 00:07:29.621 06:44:43 -- nvmf/common.sh@124 -- # return 0 00:07:29.621 06:44:43 -- nvmf/common.sh@477 -- # '[' -n 403149 ']' 00:07:29.621 06:44:43 -- nvmf/common.sh@478 -- # killprocess 403149 00:07:29.621 06:44:43 -- common/autotest_common.sh@926 -- # '[' -z 403149 ']' 00:07:29.621 06:44:43 -- common/autotest_common.sh@930 -- # kill -0 403149 00:07:29.621 06:44:43 -- common/autotest_common.sh@931 -- # uname 00:07:29.621 06:44:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:29.621 06:44:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 403149 00:07:29.621 06:44:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:29.621 06:44:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:29.621 06:44:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 403149' 00:07:29.621 killing process with pid 403149 00:07:29.621 06:44:43 -- common/autotest_common.sh@945 -- # kill 403149 00:07:29.621 [2024-05-15 06:44:43.738988] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:29.621 06:44:43 -- common/autotest_common.sh@950 -- # wait 403149 00:07:29.879 06:44:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:29.879 06:44:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:29.879 06:44:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:29.879 06:44:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:29.879 06:44:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:29.879 06:44:44 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.879 06:44:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.879 06:44:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.416 06:44:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:32.416 00:07:32.416 real 0m6.429s 00:07:32.416 user 0m6.905s 00:07:32.416 sys 0m2.133s 00:07:32.416 06:44:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.416 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:32.416 ************************************ 00:07:32.416 END TEST nvmf_discovery 00:07:32.416 ************************************ 00:07:32.416 06:44:46 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:32.416 06:44:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:32.416 06:44:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.416 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:32.416 ************************************ 00:07:32.416 START TEST nvmf_referrals 00:07:32.416 ************************************ 00:07:32.416 06:44:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:32.416 * Looking for test storage... 00:07:32.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.416 06:44:46 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.416 06:44:46 -- nvmf/common.sh@7 -- # uname -s 00:07:32.416 06:44:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.416 06:44:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.416 06:44:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.416 06:44:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.416 06:44:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.416 06:44:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.416 06:44:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.416 06:44:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.416 06:44:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.416 06:44:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.416 06:44:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.416 06:44:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.416 06:44:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.416 06:44:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.416 06:44:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.416 06:44:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.416 06:44:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.416 06:44:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.416 06:44:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.417 06:44:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.417 06:44:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.417 06:44:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.417 06:44:46 -- paths/export.sh@5 -- # export PATH 00:07:32.417 06:44:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.417 06:44:46 -- nvmf/common.sh@46 -- # : 0 00:07:32.417 06:44:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:32.417 06:44:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:32.417 06:44:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:32.417 06:44:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.417 06:44:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.417 06:44:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:32.417 06:44:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:32.417 06:44:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:32.417 06:44:46 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:32.417 06:44:46 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:32.417 06:44:46 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:32.417 06:44:46 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:32.417 06:44:46 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:32.417 06:44:46 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:32.417 06:44:46 -- target/referrals.sh@37 -- # nvmftestinit 00:07:32.417 06:44:46 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:32.417 06:44:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.417 06:44:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:32.417 06:44:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:32.417 06:44:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:32.417 06:44:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.417 06:44:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.417 06:44:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.417 06:44:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:32.417 06:44:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:32.417 06:44:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:32.417 06:44:46 -- common/autotest_common.sh@10 -- # set +x 00:07:34.949 06:44:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:34.949 06:44:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:34.949 06:44:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:34.949 06:44:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:34.949 06:44:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:34.949 06:44:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:34.949 06:44:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:34.949 06:44:48 -- nvmf/common.sh@294 -- # net_devs=() 00:07:34.949 06:44:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:34.949 06:44:48 -- nvmf/common.sh@295 -- # e810=() 00:07:34.949 06:44:48 -- nvmf/common.sh@295 -- # local -ga e810 00:07:34.949 06:44:48 -- nvmf/common.sh@296 -- # x722=() 00:07:34.949 06:44:48 -- nvmf/common.sh@296 -- # local -ga x722 00:07:34.949 06:44:48 -- nvmf/common.sh@297 -- # mlx=() 00:07:34.949 06:44:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:34.949 06:44:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.949 06:44:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:34.949 06:44:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:34.949 06:44:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:34.949 06:44:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:34.949 06:44:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:34.949 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:34.949 06:44:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:34.949 06:44:48 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:34.949 06:44:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:34.949 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:34.949 06:44:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:34.949 06:44:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:34.949 06:44:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.949 06:44:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:34.949 06:44:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.949 06:44:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:34.949 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:34.949 06:44:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.949 06:44:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:34.949 06:44:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.949 06:44:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:34.949 06:44:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.949 06:44:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:34.949 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:34.949 06:44:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.949 06:44:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:34.949 06:44:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:34.949 06:44:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:34.949 06:44:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:34.949 06:44:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.949 06:44:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.949 06:44:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.949 06:44:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:34.949 06:44:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.949 06:44:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.949 06:44:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:34.949 06:44:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.949 06:44:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.949 06:44:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:34.949 06:44:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:34.949 06:44:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.949 06:44:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
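The referrals run is repeating the same nvmf_tcp_init wiring that ran before the discovery test: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The full sequence, condensed from the commands visible in this log (a sketch; error handling and cleanup are not shown):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                     # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator reachability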
00:07:34.949 06:44:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.949 06:44:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.949 06:44:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:34.949 06:44:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.949 06:44:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.949 06:44:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.949 06:44:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:34.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:07:34.949 00:07:34.949 --- 10.0.0.2 ping statistics --- 00:07:34.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.949 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:34.949 06:44:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:07:34.950 00:07:34.950 --- 10.0.0.1 ping statistics --- 00:07:34.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.950 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:07:34.950 06:44:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.950 06:44:48 -- nvmf/common.sh@410 -- # return 0 00:07:34.950 06:44:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:34.950 06:44:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.950 06:44:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:34.950 06:44:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:34.950 06:44:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.950 06:44:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:34.950 06:44:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:34.950 06:44:48 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:34.950 06:44:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:34.950 06:44:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:34.950 06:44:48 -- common/autotest_common.sh@10 -- # set +x 00:07:34.950 06:44:48 -- nvmf/common.sh@469 -- # nvmfpid=405674 00:07:34.950 06:44:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.950 06:44:48 -- nvmf/common.sh@470 -- # waitforlisten 405674 00:07:34.950 06:44:48 -- common/autotest_common.sh@819 -- # '[' -z 405674 ']' 00:07:34.950 06:44:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.950 06:44:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:34.950 06:44:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.950 06:44:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:34.950 06:44:48 -- common/autotest_common.sh@10 -- # set +x 00:07:34.950 [2024-05-15 06:44:48.771992] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:34.950 [2024-05-15 06:44:48.772067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.950 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.950 [2024-05-15 06:44:48.852509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.950 [2024-05-15 06:44:48.971988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.950 [2024-05-15 06:44:48.972151] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.950 [2024-05-15 06:44:48.972170] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.950 [2024-05-15 06:44:48.972184] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.950 [2024-05-15 06:44:48.972249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.950 [2024-05-15 06:44:48.972306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.950 [2024-05-15 06:44:48.972359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.950 [2024-05-15 06:44:48.972362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.891 06:44:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.891 06:44:49 -- common/autotest_common.sh@852 -- # return 0 00:07:35.891 06:44:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:35.891 06:44:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:35.891 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.891 06:44:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.891 06:44:49 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.891 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 [2024-05-15 06:44:49.828707] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.892 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:35.892 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 [2024-05-15 06:44:49.840894] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:35.892 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:35.892 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:35.892 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:07:35.892 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.892 06:44:49 -- target/referrals.sh@48 -- # jq length 00:07:35.892 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:35.892 06:44:49 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:35.892 06:44:49 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:35.892 06:44:49 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.892 06:44:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:49 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:35.892 06:44:49 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:49 -- target/referrals.sh@21 -- # sort 00:07:35.892 06:44:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:35.892 06:44:49 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:35.892 06:44:49 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:35.892 06:44:49 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:35.892 06:44:49 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:35.892 06:44:49 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:35.892 06:44:49 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:35.892 06:44:49 -- target/referrals.sh@26 -- # sort 00:07:35.892 06:44:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:35.892 06:44:50 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:35.892 06:44:50 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:35.892 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:50 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:35.892 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:50 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:35.892 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.892 06:44:50 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:35.892 06:44:50 -- target/referrals.sh@56 -- # jq length 00:07:35.892 06:44:50 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.892 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:35.892 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.155 06:44:50 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:36.155 06:44:50 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:36.155 06:44:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.155 06:44:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # sort 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # echo 00:07:36.155 06:44:50 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:36.155 06:44:50 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:36.155 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.155 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.155 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.155 06:44:50 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:36.155 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.155 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.155 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.155 06:44:50 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:36.155 06:44:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:36.155 06:44:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.155 06:44:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:36.155 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.155 06:44:50 -- target/referrals.sh@21 -- # sort 00:07:36.155 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.155 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.155 06:44:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:36.155 06:44:50 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:36.155 06:44:50 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:36.155 06:44:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.155 06:44:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # sort 00:07:36.155 06:44:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:36.155 06:44:50 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:36.155 06:44:50 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:36.155 06:44:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:36.155 06:44:50 
-- target/referrals.sh@67 -- # jq -r .subnqn 00:07:36.155 06:44:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.155 06:44:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:36.440 06:44:50 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:36.440 06:44:50 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:36.440 06:44:50 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:36.440 06:44:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:36.440 06:44:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.440 06:44:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:36.440 06:44:50 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:36.440 06:44:50 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:36.440 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.440 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.440 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.440 06:44:50 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:36.440 06:44:50 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:36.440 06:44:50 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.440 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.440 06:44:50 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:36.440 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.440 06:44:50 -- target/referrals.sh@21 -- # sort 00:07:36.440 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.440 06:44:50 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:36.440 06:44:50 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:36.440 06:44:50 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:36.440 06:44:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.440 06:44:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.440 06:44:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.440 06:44:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.440 06:44:50 -- target/referrals.sh@26 -- # sort 00:07:36.440 06:44:50 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:36.698 06:44:50 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:36.698 06:44:50 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:36.698 06:44:50 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:36.698 06:44:50 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:36.698 06:44:50 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.698 06:44:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:36.698 06:44:50 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:36.698 06:44:50 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:36.698 06:44:50 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:36.698 06:44:50 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:36.698 06:44:50 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.698 06:44:50 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:36.698 06:44:50 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:36.698 06:44:50 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:36.698 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.698 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.698 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.698 06:44:50 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:36.698 06:44:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.698 06:44:50 -- target/referrals.sh@82 -- # jq length 00:07:36.698 06:44:50 -- common/autotest_common.sh@10 -- # set +x 00:07:36.698 06:44:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.698 06:44:50 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:36.698 06:44:50 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:36.698 06:44:50 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:36.698 06:44:50 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:36.698 06:44:50 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:36.698 06:44:50 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:36.698 06:44:50 -- target/referrals.sh@26 -- # sort 00:07:36.956 06:44:51 -- target/referrals.sh@26 -- # echo 00:07:36.956 06:44:51 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:36.956 06:44:51 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:36.956 06:44:51 -- target/referrals.sh@86 -- # nvmftestfini 00:07:36.956 06:44:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:36.956 06:44:51 -- nvmf/common.sh@116 -- # sync 00:07:36.956 06:44:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:36.956 06:44:51 -- nvmf/common.sh@119 -- # set +e 00:07:36.956 06:44:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:36.956 06:44:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:36.956 rmmod nvme_tcp 00:07:36.956 rmmod nvme_fabrics 00:07:36.956 rmmod nvme_keyring 00:07:36.956 06:44:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:36.956 06:44:51 -- nvmf/common.sh@123 -- # set -e 00:07:36.956 06:44:51 -- nvmf/common.sh@124 -- # return 0 00:07:36.956 06:44:51 -- nvmf/common.sh@477 
-- # '[' -n 405674 ']' 00:07:36.956 06:44:51 -- nvmf/common.sh@478 -- # killprocess 405674 00:07:36.956 06:44:51 -- common/autotest_common.sh@926 -- # '[' -z 405674 ']' 00:07:36.957 06:44:51 -- common/autotest_common.sh@930 -- # kill -0 405674 00:07:36.957 06:44:51 -- common/autotest_common.sh@931 -- # uname 00:07:36.957 06:44:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:36.957 06:44:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 405674 00:07:36.957 06:44:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.957 06:44:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.957 06:44:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 405674' 00:07:36.957 killing process with pid 405674 00:07:36.957 06:44:51 -- common/autotest_common.sh@945 -- # kill 405674 00:07:36.957 06:44:51 -- common/autotest_common.sh@950 -- # wait 405674 00:07:37.216 06:44:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:37.216 06:44:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:37.216 06:44:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:37.216 06:44:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.216 06:44:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:37.216 06:44:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.216 06:44:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.216 06:44:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.754 06:44:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:39.754 00:07:39.754 real 0m7.323s 00:07:39.754 user 0m11.203s 00:07:39.754 sys 0m2.295s 00:07:39.754 06:44:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.754 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.754 ************************************ 00:07:39.754 END TEST nvmf_referrals 00:07:39.754 ************************************ 00:07:39.754 06:44:53 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:39.754 06:44:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:39.754 06:44:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.754 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:07:39.754 ************************************ 00:07:39.754 START TEST nvmf_connect_disconnect 00:07:39.754 ************************************ 00:07:39.754 06:44:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:39.754 * Looking for test storage... 
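That closes nvmf_referrals: three referrals were registered, observed identically through the RPC (nvmf_discovery_get_referrals) and through an nvme discover of the port-8009 discovery service, then removed and re-verified, including the -n discovery and -n nqn.2016-06.io.spdk:cnode1 subsystem-NQN variants. The core add/verify/remove cycle, condensed from the trace above — the rpc wrapper is a stand-in for the test's rpc_cmd, and NVME_HOSTNQN/NVME_HOSTID stand for the generated values shown in the log:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc nvmf_discovery_get_referrals | jq length          # expect 3
  # Cross-check from the initiator side via the discovery service:
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  rpc nvmf_discovery_get_referrals | jq length          # expect 0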
00:07:39.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.754 06:44:53 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.754 06:44:53 -- nvmf/common.sh@7 -- # uname -s 00:07:39.754 06:44:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.754 06:44:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.754 06:44:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.754 06:44:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.754 06:44:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.754 06:44:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.754 06:44:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.754 06:44:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.754 06:44:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.754 06:44:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.754 06:44:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.754 06:44:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.754 06:44:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.754 06:44:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.754 06:44:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.754 06:44:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.754 06:44:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.754 06:44:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.754 06:44:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.754 06:44:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.755 06:44:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.755 06:44:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.755 06:44:53 -- paths/export.sh@5 -- # export PATH 00:07:39.755 06:44:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.755 06:44:53 -- nvmf/common.sh@46 -- # : 0 00:07:39.755 06:44:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:39.755 06:44:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:39.755 06:44:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:39.755 06:44:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.755 06:44:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.755 06:44:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:39.755 06:44:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:39.755 06:44:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:39.755 06:44:53 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:39.755 06:44:53 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:39.755 06:44:53 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:39.755 06:44:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:39.755 06:44:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.755 06:44:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:39.755 06:44:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:39.755 06:44:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:39.755 06:44:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.755 06:44:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.755 06:44:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.755 06:44:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:39.755 06:44:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:39.755 06:44:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:39.755 06:44:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.286 06:44:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:42.286 06:44:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:42.286 06:44:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:42.286 06:44:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:42.286 06:44:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:42.286 06:44:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:42.286 06:44:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:42.286 06:44:56 -- nvmf/common.sh@294 -- # net_devs=() 00:07:42.286 06:44:56 -- nvmf/common.sh@294 -- # local -ga net_devs 
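gather_supported_nvmf_pci_devs, which starts here for the connect_disconnect run, buckets NICs by vendor:device ID — e810 (0x8086:0x1592, 0x8086:0x159b), x722 (0x8086:0x37d2), and a list of Mellanox 0x15b3 IDs — then keeps only the e810 set because SPDK_TEST_NVMF_NICS=e810. The same lookup can be reproduced with pciutils; this is a sketch, not the script's pci_bus_cache mechanism:

  # Find E810 functions by the two device IDs matched in this trace.
  for id in 1592 159b; do lspci -Dnd 8086:$id; done
  # Net interface(s) behind a function, as common.sh derives them from sysfs:
  ls /sys/bus/pci/devices/0000:0a:00.0/net        # -> cvl_0_0 on this host

The per-function sysfs glob ("/sys/bus/pci/devices/$pci/net/"*) is exactly what produces the "Found net devices under 0000:0a:00.x" lines below.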
00:07:42.286 06:44:56 -- nvmf/common.sh@295 -- # e810=() 00:07:42.286 06:44:56 -- nvmf/common.sh@295 -- # local -ga e810 00:07:42.286 06:44:56 -- nvmf/common.sh@296 -- # x722=() 00:07:42.286 06:44:56 -- nvmf/common.sh@296 -- # local -ga x722 00:07:42.286 06:44:56 -- nvmf/common.sh@297 -- # mlx=() 00:07:42.286 06:44:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:42.286 06:44:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.286 06:44:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:42.286 06:44:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:42.286 06:44:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:42.286 06:44:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:42.286 06:44:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:42.286 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:42.286 06:44:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:42.286 06:44:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:42.286 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:42.286 06:44:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:42.286 06:44:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:42.286 06:44:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.286 06:44:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:42.286 06:44:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.286 06:44:56 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:07:42.286 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:42.286 06:44:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.286 06:44:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:42.286 06:44:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.286 06:44:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:42.286 06:44:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.286 06:44:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:42.286 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:42.286 06:44:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.286 06:44:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:42.286 06:44:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:42.286 06:44:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:42.286 06:44:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:42.286 06:44:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.286 06:44:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.286 06:44:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.287 06:44:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:42.287 06:44:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.287 06:44:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.287 06:44:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:42.287 06:44:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.287 06:44:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.287 06:44:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:42.287 06:44:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:42.287 06:44:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.287 06:44:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.287 06:44:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.287 06:44:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.287 06:44:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:42.287 06:44:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.287 06:44:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.287 06:44:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.287 06:44:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:42.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:07:42.287 00:07:42.287 --- 10.0.0.2 ping statistics --- 00:07:42.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.287 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:07:42.287 06:44:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:07:42.287 00:07:42.287 --- 10.0.0.1 ping statistics --- 00:07:42.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.287 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:42.287 06:44:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.287 06:44:56 -- nvmf/common.sh@410 -- # return 0 00:07:42.287 06:44:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:42.287 06:44:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.287 06:44:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:42.287 06:44:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:42.287 06:44:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.287 06:44:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:42.287 06:44:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:42.287 06:44:56 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:42.287 06:44:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:42.287 06:44:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:42.287 06:44:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.287 06:44:56 -- nvmf/common.sh@469 -- # nvmfpid=408399 00:07:42.287 06:44:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.287 06:44:56 -- nvmf/common.sh@470 -- # waitforlisten 408399 00:07:42.287 06:44:56 -- common/autotest_common.sh@819 -- # '[' -z 408399 ']' 00:07:42.287 06:44:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.287 06:44:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:42.287 06:44:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.287 06:44:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:42.287 06:44:56 -- common/autotest_common.sh@10 -- # set +x 00:07:42.287 [2024-05-15 06:44:56.221763] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:42.287 [2024-05-15 06:44:56.221832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.287 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.287 [2024-05-15 06:44:56.300031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.287 [2024-05-15 06:44:56.426021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.287 [2024-05-15 06:44:56.426186] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.287 [2024-05-15 06:44:56.426205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.287 [2024-05-15 06:44:56.426220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
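The "EAL: No free 2048 kB hugepages reported on node 1" line just above is a notice, not a failure — it usually means the 2 MiB pool was reserved on node 0 only, which is an assumption here since the log does not show the reservation step. Per-node hugepage state can be read straight from sysfs:

  # Check 2 MiB hugepage reservations per NUMA node (standard sysfs layout).
  for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      echo "$n: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
  done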
00:07:42.287 [2024-05-15 06:44:56.426307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.287 [2024-05-15 06:44:56.426362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.287 [2024-05-15 06:44:56.426413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.287 [2024-05-15 06:44:56.426416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.220 06:44:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.220 06:44:57 -- common/autotest_common.sh@852 -- # return 0 00:07:43.220 06:44:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:43.220 06:44:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:43.220 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.220 06:44:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:43.220 06:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.220 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.220 [2024-05-15 06:44:57.275640] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.220 06:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:43.220 06:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.220 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.220 06:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.220 06:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.220 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.220 06:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:43.220 06:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.220 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.220 06:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.220 06:44:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:43.220 06:44:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.220 [2024-05-15 06:44:57.327303] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.220 06:44:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:43.220 06:44:57 -- target/connect_disconnect.sh@34 -- # set +x 00:07:45.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:54.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:11.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.044 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.572 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:45.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.124 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.447 06:48:43 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
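Those hundred "disconnected 1 controller(s)" notices are the test body: connect_disconnect.sh provisions a Malloc-backed subsystem on 10.0.0.2:4420, then attaches and detaches it num_iterations=100 times with NVME_CONNECT='nvme connect -i 8', after which the trap is cleared and nvmftestfini tears everything down below. The overall shape, reconstructed from the trace — the wait-for-device step between connect and disconnect is an assumption about the elided helper:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc bdev_malloc_create 64 512                 # MALLOC_BDEV_SIZE/BLOCK_SIZE -> Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
          --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
      # ...wait for the namespace block device to appear, then detach:
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits the notices above
  done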
00:11:29.447 06:48:43 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:29.447 06:48:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:29.447 06:48:43 -- nvmf/common.sh@116 -- # sync 00:11:29.447 06:48:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:29.447 06:48:43 -- nvmf/common.sh@119 -- # set +e 00:11:29.447 06:48:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:29.447 06:48:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:29.447 rmmod nvme_tcp 00:11:29.447 rmmod nvme_fabrics 00:11:29.447 rmmod nvme_keyring 00:11:29.447 06:48:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:29.447 06:48:43 -- nvmf/common.sh@123 -- # set -e 00:11:29.447 06:48:43 -- nvmf/common.sh@124 -- # return 0 00:11:29.447 06:48:43 -- nvmf/common.sh@477 -- # '[' -n 408399 ']' 00:11:29.447 06:48:43 -- nvmf/common.sh@478 -- # killprocess 408399 00:11:29.447 06:48:43 -- common/autotest_common.sh@926 -- # '[' -z 408399 ']' 00:11:29.447 06:48:43 -- common/autotest_common.sh@930 -- # kill -0 408399 00:11:29.447 06:48:43 -- common/autotest_common.sh@931 -- # uname 00:11:29.447 06:48:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:29.447 06:48:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 408399 00:11:29.447 06:48:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:29.447 06:48:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:29.447 06:48:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 408399' 00:11:29.447 killing process with pid 408399 00:11:29.447 06:48:43 -- common/autotest_common.sh@945 -- # kill 408399 00:11:29.447 06:48:43 -- common/autotest_common.sh@950 -- # wait 408399 00:11:30.015 06:48:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:30.015 06:48:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:30.015 06:48:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:30.015 06:48:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:30.015 06:48:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:30.015 06:48:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.015 06:48:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.015 06:48:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.919 06:48:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:31.919 00:11:31.919 real 3m52.567s 00:11:31.919 user 14m44.328s 00:11:31.919 sys 0m30.583s 00:11:31.919 06:48:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.919 06:48:46 -- common/autotest_common.sh@10 -- # set +x 00:11:31.919 ************************************ 00:11:31.919 END TEST nvmf_connect_disconnect 00:11:31.919 ************************************ 00:11:31.919 06:48:46 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:31.919 06:48:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:31.919 06:48:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.919 06:48:46 -- common/autotest_common.sh@10 -- # set +x 00:11:31.919 ************************************ 00:11:31.919 START TEST nvmf_multitarget 00:11:31.919 ************************************ 00:11:31.919 06:48:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:31.919 * Looking for test storage... 
00:11:31.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.919 06:48:46 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.919 06:48:46 -- nvmf/common.sh@7 -- # uname -s 00:11:31.919 06:48:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.919 06:48:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.919 06:48:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.919 06:48:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.920 06:48:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.920 06:48:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.920 06:48:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.920 06:48:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.920 06:48:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.920 06:48:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.920 06:48:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.920 06:48:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.920 06:48:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.920 06:48:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.920 06:48:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.920 06:48:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.920 06:48:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.920 06:48:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.920 06:48:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.920 06:48:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.920 06:48:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.920 06:48:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.920 06:48:46 -- paths/export.sh@5 -- # export PATH 00:11:31.920 06:48:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.920 06:48:46 -- nvmf/common.sh@46 -- # : 0 00:11:31.920 06:48:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:31.920 06:48:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:31.920 06:48:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:31.920 06:48:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.920 06:48:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.920 06:48:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:31.920 06:48:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:31.920 06:48:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:31.920 06:48:46 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:31.920 06:48:46 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:31.920 06:48:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:31.920 06:48:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.920 06:48:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:31.920 06:48:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:31.920 06:48:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:31.920 06:48:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.920 06:48:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.920 06:48:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.920 06:48:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:31.920 06:48:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:31.920 06:48:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:31.920 06:48:46 -- common/autotest_common.sh@10 -- # set +x 00:11:34.458 06:48:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:34.458 06:48:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:34.458 06:48:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:34.458 06:48:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:34.458 06:48:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:34.458 06:48:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:34.458 06:48:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:34.458 06:48:48 -- nvmf/common.sh@294 -- # net_devs=() 00:11:34.458 06:48:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:34.458 06:48:48 -- 
nvmf/common.sh@295 -- # e810=() 00:11:34.458 06:48:48 -- nvmf/common.sh@295 -- # local -ga e810 00:11:34.458 06:48:48 -- nvmf/common.sh@296 -- # x722=() 00:11:34.458 06:48:48 -- nvmf/common.sh@296 -- # local -ga x722 00:11:34.458 06:48:48 -- nvmf/common.sh@297 -- # mlx=() 00:11:34.458 06:48:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:34.458 06:48:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.458 06:48:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:34.458 06:48:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:34.458 06:48:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:34.458 06:48:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:34.458 06:48:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:34.458 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:34.458 06:48:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:34.458 06:48:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:34.458 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:34.458 06:48:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:34.458 06:48:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:34.458 06:48:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.458 06:48:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:34.458 06:48:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.458 06:48:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:11:34.458 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:34.458 06:48:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.458 06:48:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:34.458 06:48:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.458 06:48:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:34.458 06:48:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.458 06:48:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:34.458 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:34.458 06:48:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.458 06:48:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:34.458 06:48:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:34.458 06:48:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:34.458 06:48:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:34.458 06:48:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.458 06:48:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.458 06:48:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.458 06:48:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:34.458 06:48:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.458 06:48:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.458 06:48:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:34.458 06:48:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.458 06:48:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.458 06:48:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:34.458 06:48:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:34.458 06:48:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.458 06:48:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.718 06:48:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.718 06:48:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.718 06:48:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:34.718 06:48:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.718 06:48:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.718 06:48:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.718 06:48:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:34.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:11:34.718 00:11:34.718 --- 10.0.0.2 ping statistics --- 00:11:34.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.718 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:34.718 06:48:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:11:34.718 00:11:34.718 --- 10.0.0.1 ping statistics --- 00:11:34.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.718 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:11:34.718 06:48:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.718 06:48:48 -- nvmf/common.sh@410 -- # return 0 00:11:34.718 06:48:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:34.718 06:48:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.718 06:48:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:34.718 06:48:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:34.718 06:48:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.718 06:48:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:34.718 06:48:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:34.718 06:48:48 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:34.718 06:48:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:34.718 06:48:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:34.718 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:11:34.718 06:48:48 -- nvmf/common.sh@469 -- # nvmfpid=440775 00:11:34.718 06:48:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.718 06:48:48 -- nvmf/common.sh@470 -- # waitforlisten 440775 00:11:34.718 06:48:48 -- common/autotest_common.sh@819 -- # '[' -z 440775 ']' 00:11:34.718 06:48:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.718 06:48:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.718 06:48:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.718 06:48:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.718 06:48:48 -- common/autotest_common.sh@10 -- # set +x 00:11:34.718 [2024-05-15 06:48:48.840144] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:34.718 [2024-05-15 06:48:48.840238] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.718 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.718 [2024-05-15 06:48:48.922001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.977 [2024-05-15 06:48:49.043612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:34.977 [2024-05-15 06:48:49.043779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.977 [2024-05-15 06:48:49.043796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.977 [2024-05-15 06:48:49.043808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
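The nvmf_tcp_init sequence traced above reduces to the short sketch below: the target-side port is moved into a network namespace so that target and initiator get separate IP stacks on one host, port 4420 (NVMe/TCP) is opened, and reachability is proven in both directions before the target app starts inside the namespace. This is a reconstruction from this run's trace, not captured output; the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are simply the values this host happened to use.

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                           # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> root ns
    ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # as in the trace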
00:11:34.977 [2024-05-15 06:48:49.046958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.977 [2024-05-15 06:48:49.047021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.977 [2024-05-15 06:48:49.047070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.977 [2024-05-15 06:48:49.047075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.910 06:48:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:35.910 06:48:49 -- common/autotest_common.sh@852 -- # return 0 00:11:35.910 06:48:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:35.910 06:48:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:35.910 06:48:49 -- common/autotest_common.sh@10 -- # set +x 00:11:35.910 06:48:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.910 06:48:49 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:35.910 06:48:49 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:35.910 06:48:49 -- target/multitarget.sh@21 -- # jq length 00:11:35.910 06:48:49 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:35.910 06:48:49 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:35.910 "nvmf_tgt_1" 00:11:35.910 06:48:50 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:36.168 "nvmf_tgt_2" 00:11:36.168 06:48:50 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:36.168 06:48:50 -- target/multitarget.sh@28 -- # jq length 00:11:36.168 06:48:50 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:36.168 06:48:50 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:36.168 true 00:11:36.168 06:48:50 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:36.426 true 00:11:36.427 06:48:50 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:36.427 06:48:50 -- target/multitarget.sh@35 -- # jq length 00:11:36.427 06:48:50 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:36.427 06:48:50 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:36.427 06:48:50 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:36.427 06:48:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:36.427 06:48:50 -- nvmf/common.sh@116 -- # sync 00:11:36.427 06:48:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:36.427 06:48:50 -- nvmf/common.sh@119 -- # set +e 00:11:36.427 06:48:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:36.427 06:48:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:36.427 rmmod nvme_tcp 00:11:36.427 rmmod nvme_fabrics 00:11:36.427 rmmod nvme_keyring 00:11:36.685 06:48:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:36.685 06:48:50 -- nvmf/common.sh@123 -- # set -e 00:11:36.685 06:48:50 -- nvmf/common.sh@124 -- # return 0 
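Condensed, the multitarget check just traced is: one default target exists, two more are created over RPC, the count is verified with jq, the extras are deleted, and the count drops back to one. A sketch of the same flow, using the multitarget_rpc.py helper this run invokes:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32: max subsystems per target
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default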
00:11:36.685 06:48:50 -- nvmf/common.sh@477 -- # '[' -n 440775 ']' 00:11:36.685 06:48:50 -- nvmf/common.sh@478 -- # killprocess 440775 00:11:36.685 06:48:50 -- common/autotest_common.sh@926 -- # '[' -z 440775 ']' 00:11:36.685 06:48:50 -- common/autotest_common.sh@930 -- # kill -0 440775 00:11:36.685 06:48:50 -- common/autotest_common.sh@931 -- # uname 00:11:36.685 06:48:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:36.685 06:48:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 440775 00:11:36.685 06:48:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:36.685 06:48:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:36.685 06:48:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 440775' 00:11:36.685 killing process with pid 440775 00:11:36.685 06:48:50 -- common/autotest_common.sh@945 -- # kill 440775 00:11:36.685 06:48:50 -- common/autotest_common.sh@950 -- # wait 440775 00:11:36.944 06:48:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:36.944 06:48:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:36.944 06:48:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:36.944 06:48:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.944 06:48:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:36.944 06:48:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.944 06:48:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.944 06:48:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.851 06:48:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:38.851 00:11:38.851 real 0m6.981s 00:11:38.851 user 0m9.392s 00:11:38.851 sys 0m2.335s 00:11:38.851 06:48:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.851 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.851 ************************************ 00:11:38.851 END TEST nvmf_multitarget 00:11:38.851 ************************************ 00:11:38.851 06:48:53 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:38.851 06:48:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:38.851 06:48:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.851 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:11:38.851 ************************************ 00:11:38.851 START TEST nvmf_rpc 00:11:38.851 ************************************ 00:11:38.851 06:48:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:39.119 * Looking for test storage... 
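The killprocess/nvmf_tcp_fini teardown above follows the usual pattern: confirm the pid is still an SPDK reactor, SIGTERM it, reap it, then strip the test addressing. Roughly (a hedged sketch; the trace elides remove_spdk_ns internals, so the netns deletion here is an assumption):

    pid=440775                      # nvmfpid from this run
    kill -0 "$pid"                  # still alive?
    kill "$pid" && wait "$pid"      # SIGTERM, then reap; the reactors exit cleanly
    ip -4 addr flush cvl_0_1        # drop the initiator-side test address
    ip netns delete cvl_0_0_ns_spdk # assumed equivalent of remove_spdk_ns here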
00:11:39.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.119 06:48:53 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.119 06:48:53 -- nvmf/common.sh@7 -- # uname -s 00:11:39.119 06:48:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.119 06:48:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.119 06:48:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.119 06:48:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.119 06:48:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.119 06:48:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.119 06:48:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.119 06:48:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.119 06:48:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.119 06:48:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.119 06:48:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.119 06:48:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.119 06:48:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.119 06:48:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.119 06:48:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.119 06:48:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.119 06:48:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.119 06:48:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.119 06:48:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.119 06:48:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.119 06:48:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.119 06:48:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.119 06:48:53 -- paths/export.sh@5 -- # export PATH 00:11:39.119 06:48:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.119 06:48:53 -- nvmf/common.sh@46 -- # : 0 00:11:39.119 06:48:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:39.119 06:48:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:39.119 06:48:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:39.119 06:48:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.119 06:48:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.119 06:48:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:39.119 06:48:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:39.119 06:48:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:39.119 06:48:53 -- target/rpc.sh@11 -- # loops=5 00:11:39.119 06:48:53 -- target/rpc.sh@23 -- # nvmftestinit 00:11:39.119 06:48:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:39.119 06:48:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.119 06:48:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:39.119 06:48:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:39.119 06:48:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:39.119 06:48:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.119 06:48:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.119 06:48:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.119 06:48:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:39.119 06:48:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:39.119 06:48:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:39.119 06:48:53 -- common/autotest_common.sh@10 -- # set +x 00:11:41.651 06:48:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:41.651 06:48:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:41.651 06:48:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:41.651 06:48:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:41.651 06:48:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:41.651 06:48:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:41.651 06:48:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:41.651 06:48:55 -- nvmf/common.sh@294 -- # net_devs=() 00:11:41.651 06:48:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:41.651 06:48:55 -- nvmf/common.sh@295 -- # e810=() 00:11:41.651 06:48:55 -- nvmf/common.sh@295 -- # local -ga e810 00:11:41.651 
06:48:55 -- nvmf/common.sh@296 -- # x722=() 00:11:41.651 06:48:55 -- nvmf/common.sh@296 -- # local -ga x722 00:11:41.651 06:48:55 -- nvmf/common.sh@297 -- # mlx=() 00:11:41.651 06:48:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:41.651 06:48:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.651 06:48:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:41.651 06:48:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:41.651 06:48:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:41.651 06:48:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:41.651 06:48:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.651 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.651 06:48:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:41.651 06:48:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.651 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.651 06:48:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.651 06:48:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:41.652 06:48:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:41.652 06:48:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:41.652 06:48:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:41.652 06:48:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:41.652 06:48:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.652 06:48:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:41.652 06:48:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.652 06:48:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.652 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.652 06:48:55 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:41.652 06:48:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:41.652 06:48:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.652 06:48:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:41.652 06:48:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.652 06:48:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.652 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.652 06:48:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.652 06:48:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:41.652 06:48:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:41.652 06:48:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:41.652 06:48:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:41.652 06:48:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:41.652 06:48:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.652 06:48:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.652 06:48:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.652 06:48:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:41.652 06:48:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.652 06:48:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.652 06:48:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:41.652 06:48:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.652 06:48:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.652 06:48:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:41.652 06:48:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:41.652 06:48:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.652 06:48:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.652 06:48:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.652 06:48:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.652 06:48:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:41.652 06:48:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.652 06:48:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.652 06:48:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.652 06:48:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:41.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:11:41.652 00:11:41.652 --- 10.0.0.2 ping statistics --- 00:11:41.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.652 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:11:41.652 06:48:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:41.910 00:11:41.910 --- 10.0.0.1 ping statistics --- 00:11:41.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.910 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:41.910 06:48:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.910 06:48:55 -- nvmf/common.sh@410 -- # return 0 00:11:41.910 06:48:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:41.910 06:48:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.910 06:48:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:41.910 06:48:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:41.910 06:48:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.910 06:48:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:41.910 06:48:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:41.910 06:48:55 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:41.910 06:48:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:41.910 06:48:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:41.910 06:48:55 -- common/autotest_common.sh@10 -- # set +x 00:11:41.910 06:48:55 -- nvmf/common.sh@469 -- # nvmfpid=443408 00:11:41.910 06:48:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.910 06:48:55 -- nvmf/common.sh@470 -- # waitforlisten 443408 00:11:41.910 06:48:55 -- common/autotest_common.sh@819 -- # '[' -z 443408 ']' 00:11:41.910 06:48:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.910 06:48:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:41.910 06:48:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.910 06:48:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:41.910 06:48:55 -- common/autotest_common.sh@10 -- # set +x 00:11:41.910 [2024-05-15 06:48:55.952305] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:41.910 [2024-05-15 06:48:55.952387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.910 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.910 [2024-05-15 06:48:56.032869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.168 [2024-05-15 06:48:56.152566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:42.168 [2024-05-15 06:48:56.152724] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.168 [2024-05-15 06:48:56.152744] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.169 [2024-05-15 06:48:56.152758] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
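waitforlisten, entered above with max_retries=100, polls the new target's RPC socket instead of sleeping for a fixed time. A minimal equivalent (a sketch: rpc.py and /var/tmp/spdk.sock are the SPDK defaults, and rpc_get_methods is a cheap RPC that any live SPDK app answers):

    nvmfpid=443408                     # pid from this run's trace
    rpc_addr=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break                      # target is up and serving RPCs
        fi
        kill -0 "$nvmfpid" || exit 1   # bail out early if the target died
        sleep 0.1
    done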
00:11:42.169 [2024-05-15 06:48:56.152842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.169 [2024-05-15 06:48:56.152899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.169 [2024-05-15 06:48:56.152957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.169 [2024-05-15 06:48:56.152961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.734 06:48:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:42.734 06:48:56 -- common/autotest_common.sh@852 -- # return 0 00:11:42.734 06:48:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:42.734 06:48:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:42.734 06:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 06:48:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.734 06:48:56 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:42.734 06:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.734 06:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.734 06:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.734 06:48:56 -- target/rpc.sh@26 -- # stats='{ 00:11:42.734 "tick_rate": 2700000000, 00:11:42.734 "poll_groups": [ 00:11:42.734 { 00:11:42.734 "name": "nvmf_tgt_poll_group_0", 00:11:42.734 "admin_qpairs": 0, 00:11:42.734 "io_qpairs": 0, 00:11:42.734 "current_admin_qpairs": 0, 00:11:42.734 "current_io_qpairs": 0, 00:11:42.734 "pending_bdev_io": 0, 00:11:42.734 "completed_nvme_io": 0, 00:11:42.734 "transports": [] 00:11:42.734 }, 00:11:42.734 { 00:11:42.734 "name": "nvmf_tgt_poll_group_1", 00:11:42.734 "admin_qpairs": 0, 00:11:42.734 "io_qpairs": 0, 00:11:42.734 "current_admin_qpairs": 0, 00:11:42.734 "current_io_qpairs": 0, 00:11:42.734 "pending_bdev_io": 0, 00:11:42.734 "completed_nvme_io": 0, 00:11:42.734 "transports": [] 00:11:42.734 }, 00:11:42.734 { 00:11:42.734 "name": "nvmf_tgt_poll_group_2", 00:11:42.734 "admin_qpairs": 0, 00:11:42.734 "io_qpairs": 0, 00:11:42.734 "current_admin_qpairs": 0, 00:11:42.734 "current_io_qpairs": 0, 00:11:42.734 "pending_bdev_io": 0, 00:11:42.734 "completed_nvme_io": 0, 00:11:42.734 "transports": [] 00:11:42.734 }, 00:11:42.734 { 00:11:42.734 "name": "nvmf_tgt_poll_group_3", 00:11:42.734 "admin_qpairs": 0, 00:11:42.734 "io_qpairs": 0, 00:11:42.734 "current_admin_qpairs": 0, 00:11:42.734 "current_io_qpairs": 0, 00:11:42.734 "pending_bdev_io": 0, 00:11:42.734 "completed_nvme_io": 0, 00:11:42.734 "transports": [] 00:11:42.734 } 00:11:42.734 ] 00:11:42.734 }' 00:11:42.734 06:48:56 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:42.734 06:48:56 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:42.734 06:48:56 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:42.734 06:48:56 -- target/rpc.sh@15 -- # wc -l 00:11:42.734 06:48:56 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:42.734 06:48:56 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:42.993 06:48:56 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:42.993 06:48:56 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.993 06:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.993 06:48:56 -- common/autotest_common.sh@10 -- # set +x 00:11:42.993 [2024-05-15 06:48:56.991550] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.993 06:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.993 06:48:56 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:42.993 06:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.993 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.993 06:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.993 06:48:57 -- target/rpc.sh@33 -- # stats='{ 00:11:42.993 "tick_rate": 2700000000, 00:11:42.993 "poll_groups": [ 00:11:42.993 { 00:11:42.993 "name": "nvmf_tgt_poll_group_0", 00:11:42.993 "admin_qpairs": 0, 00:11:42.993 "io_qpairs": 0, 00:11:42.993 "current_admin_qpairs": 0, 00:11:42.993 "current_io_qpairs": 0, 00:11:42.993 "pending_bdev_io": 0, 00:11:42.993 "completed_nvme_io": 0, 00:11:42.993 "transports": [ 00:11:42.993 { 00:11:42.993 "trtype": "TCP" 00:11:42.993 } 00:11:42.993 ] 00:11:42.993 }, 00:11:42.993 { 00:11:42.993 "name": "nvmf_tgt_poll_group_1", 00:11:42.993 "admin_qpairs": 0, 00:11:42.993 "io_qpairs": 0, 00:11:42.993 "current_admin_qpairs": 0, 00:11:42.993 "current_io_qpairs": 0, 00:11:42.994 "pending_bdev_io": 0, 00:11:42.994 "completed_nvme_io": 0, 00:11:42.994 "transports": [ 00:11:42.994 { 00:11:42.994 "trtype": "TCP" 00:11:42.994 } 00:11:42.994 ] 00:11:42.994 }, 00:11:42.994 { 00:11:42.994 "name": "nvmf_tgt_poll_group_2", 00:11:42.994 "admin_qpairs": 0, 00:11:42.994 "io_qpairs": 0, 00:11:42.994 "current_admin_qpairs": 0, 00:11:42.994 "current_io_qpairs": 0, 00:11:42.994 "pending_bdev_io": 0, 00:11:42.994 "completed_nvme_io": 0, 00:11:42.994 "transports": [ 00:11:42.994 { 00:11:42.994 "trtype": "TCP" 00:11:42.994 } 00:11:42.994 ] 00:11:42.994 }, 00:11:42.994 { 00:11:42.994 "name": "nvmf_tgt_poll_group_3", 00:11:42.994 "admin_qpairs": 0, 00:11:42.994 "io_qpairs": 0, 00:11:42.994 "current_admin_qpairs": 0, 00:11:42.994 "current_io_qpairs": 0, 00:11:42.994 "pending_bdev_io": 0, 00:11:42.994 "completed_nvme_io": 0, 00:11:42.994 "transports": [ 00:11:42.994 { 00:11:42.994 "trtype": "TCP" 00:11:42.994 } 00:11:42.994 ] 00:11:42.994 } 00:11:42.994 ] 00:11:42.994 }' 00:11:42.994 06:48:57 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:42.994 06:48:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:42.994 06:48:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:42.994 06:48:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.994 06:48:57 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:42.994 06:48:57 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:42.994 06:48:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:42.994 06:48:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:42.994 06:48:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.994 06:48:57 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:42.994 06:48:57 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:42.994 06:48:57 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:42.994 06:48:57 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:42.994 06:48:57 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:42.994 06:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.994 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 Malloc1 00:11:42.994 06:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.994 06:48:57 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:42.994 06:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.994 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 
06:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.994 06:48:57 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.994 06:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.994 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 06:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.994 06:48:57 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:42.994 06:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.994 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 06:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.994 06:48:57 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.994 06:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.994 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 [2024-05-15 06:48:57.145172] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.994 06:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.994 06:48:57 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:42.994 06:48:57 -- common/autotest_common.sh@640 -- # local es=0 00:11:42.994 06:48:57 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:42.994 06:48:57 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:42.994 06:48:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:42.994 06:48:57 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:42.994 06:48:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:42.994 06:48:57 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:42.994 06:48:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:42.994 06:48:57 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:42.994 06:48:57 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:42.994 06:48:57 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:42.994 [2024-05-15 06:48:57.167994] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:42.994 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:42.994 could not add new controller: failed to write to nvme-fabrics device 00:11:42.994 06:48:57 -- common/autotest_common.sh@643 -- # es=1 00:11:42.994 06:48:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:42.994 06:48:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:42.994 06:48:57 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:11:42.994 06:48:57 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:42.994 06:48:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:42.994 06:48:57 -- common/autotest_common.sh@10 -- # set +x 00:11:42.994 06:48:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:42.994 06:48:57 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.560 06:48:57 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.560 06:48:57 -- common/autotest_common.sh@1177 -- # local i=0 00:11:43.560 06:48:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.560 06:48:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:43.561 06:48:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:46.119 06:48:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:46.119 06:48:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:46.119 06:48:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.119 06:48:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:46.119 06:48:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.119 06:48:59 -- common/autotest_common.sh@1187 -- # return 0 00:11:46.119 06:48:59 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.119 06:48:59 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.119 06:48:59 -- common/autotest_common.sh@1198 -- # local i=0 00:11:46.119 06:48:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:46.119 06:48:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.119 06:48:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:46.119 06:48:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.119 06:48:59 -- common/autotest_common.sh@1210 -- # return 0 00:11:46.119 06:48:59 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.119 06:48:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.119 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:11:46.119 06:48:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.119 06:48:59 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.119 06:48:59 -- common/autotest_common.sh@640 -- # local es=0 00:11:46.119 06:48:59 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.119 06:48:59 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:46.119 06:48:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.119 06:48:59 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:46.119 06:48:59 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.119 06:48:59 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:46.119 06:48:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.119 06:48:59 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:46.119 06:48:59 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:46.119 06:48:59 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.119 [2024-05-15 06:48:59.928839] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:46.119 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:46.119 could not add new controller: failed to write to nvme-fabrics device 00:11:46.119 06:48:59 -- common/autotest_common.sh@643 -- # es=1 00:11:46.119 06:48:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:46.119 06:48:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:46.119 06:48:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:46.119 06:48:59 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:46.119 06:48:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.119 06:48:59 -- common/autotest_common.sh@10 -- # set +x 00:11:46.119 06:48:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.119 06:48:59 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.377 06:49:00 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.377 06:49:00 -- common/autotest_common.sh@1177 -- # local i=0 00:11:46.377 06:49:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.377 06:49:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:46.377 06:49:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:48.904 06:49:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:48.904 06:49:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:48.904 06:49:02 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.904 06:49:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:48.904 06:49:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.904 06:49:02 -- common/autotest_common.sh@1187 -- # return 0 00:11:48.904 06:49:02 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.904 06:49:02 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.904 06:49:02 -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.904 06:49:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:48.904 06:49:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.904 06:49:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:48.904 06:49:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.904 06:49:02 -- common/autotest_common.sh@1210 -- # return 0 00:11:48.904 06:49:02 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.904 06:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.904 06:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.904 06:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.904 06:49:02 -- target/rpc.sh@81 -- # seq 1 5 00:11:48.904 06:49:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:48.904 06:49:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:48.904 06:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.904 06:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.904 06:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.904 06:49:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.904 06:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.904 06:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.904 [2024-05-15 06:49:02.684302] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.904 06:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.904 06:49:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:48.904 06:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.904 06:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.904 06:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.904 06:49:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:48.904 06:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.904 06:49:02 -- common/autotest_common.sh@10 -- # set +x 00:11:48.904 06:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.904 06:49:02 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.161 06:49:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.161 06:49:03 -- common/autotest_common.sh@1177 -- # local i=0 00:11:49.161 06:49:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.161 06:49:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:49.161 06:49:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:51.691 06:49:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:51.691 06:49:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:51.691 06:49:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.691 06:49:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:51.691 06:49:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.691 06:49:05 -- common/autotest_common.sh@1187 -- # return 0 00:11:51.691 06:49:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.691 06:49:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.691 06:49:05 -- common/autotest_common.sh@1198 -- # local i=0 00:11:51.691 06:49:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:51.691 06:49:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
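Each rpc.sh iteration traced here walks the full subsystem lifecycle: malloc bdev, subsystem with serial SPDKISFASTANDAWESOME, TCP listener, namespace 5, host connect, serial visible in lsblk, disconnect, then namespace and subsystem removal. The same flow as direct rpc.py calls (a sketch; the log drives identical RPCs through the rpc_cmd wrapper, after the earlier nvmf_create_transport -t tcp -o -u 8192):

    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    HOSTNQN=$(nvme gen-hostnqn)                           # stand-in for this run's uuid NQN
    rpc bdev_malloc_create 64 512 -b Malloc1              # 64 MiB bdev, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME   # namespace visible?
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1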
00:11:51.691 06:49:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:51.692 06:49:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.692 06:49:05 -- common/autotest_common.sh@1210 -- # return 0 00:11:51.692 06:49:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.692 06:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.692 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:51.692 06:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.692 06:49:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.692 06:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.692 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:51.692 06:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.692 06:49:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:51.692 06:49:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:51.692 06:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.692 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:51.692 06:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.692 06:49:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.692 06:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.692 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:51.692 [2024-05-15 06:49:05.456587] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.692 06:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.692 06:49:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:51.692 06:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.692 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:51.692 06:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.692 06:49:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:51.692 06:49:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.692 06:49:05 -- common/autotest_common.sh@10 -- # set +x 00:11:51.692 06:49:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.692 06:49:05 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.954 06:49:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.954 06:49:06 -- common/autotest_common.sh@1177 -- # local i=0 00:11:51.954 06:49:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.954 06:49:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:51.954 06:49:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:53.853 06:49:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:53.853 06:49:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:53.853 06:49:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.853 06:49:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:53.853 06:49:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.853 06:49:08 -- 
common/autotest_common.sh@1187 -- # return 0 00:11:53.853 06:49:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.111 06:49:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.111 06:49:08 -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.111 06:49:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:54.111 06:49:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.111 06:49:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:54.111 06:49:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.111 06:49:08 -- common/autotest_common.sh@1210 -- # return 0 00:11:54.111 06:49:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.111 06:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.111 06:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.111 06:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.111 06:49:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.111 06:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.111 06:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.111 06:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.111 06:49:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:54.111 06:49:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.111 06:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.111 06:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.111 06:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.111 06:49:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.111 06:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.111 06:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.111 [2024-05-15 06:49:08.214343] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.111 06:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.111 06:49:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:54.111 06:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.111 06:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.111 06:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.111 06:49:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.111 06:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:54.111 06:49:08 -- common/autotest_common.sh@10 -- # set +x 00:11:54.111 06:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:54.111 06:49:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.677 06:49:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.677 06:49:08 -- common/autotest_common.sh@1177 -- # local i=0 00:11:54.677 06:49:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.677 06:49:08 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:11:54.677 06:49:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:57.206 06:49:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:57.206 06:49:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:57.206 06:49:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.206 06:49:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:57.206 06:49:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.206 06:49:10 -- common/autotest_common.sh@1187 -- # return 0 00:11:57.206 06:49:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.206 06:49:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.206 06:49:10 -- common/autotest_common.sh@1198 -- # local i=0 00:11:57.206 06:49:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:57.206 06:49:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.206 06:49:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:57.206 06:49:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.206 06:49:10 -- common/autotest_common.sh@1210 -- # return 0 00:11:57.206 06:49:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.206 06:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.206 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 06:49:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.206 06:49:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.206 06:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.206 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 06:49:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.206 06:49:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:57.206 06:49:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:57.206 06:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.206 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 06:49:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.206 06:49:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.206 06:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.206 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 [2024-05-15 06:49:10.988111] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.206 06:49:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.206 06:49:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:57.206 06:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.206 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 06:49:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.206 06:49:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:57.206 06:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:57.206 06:49:10 -- common/autotest_common.sh@10 -- # set +x 00:11:57.206 06:49:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:57.206 
06:49:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.464 06:49:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.464 06:49:11 -- common/autotest_common.sh@1177 -- # local i=0 00:11:57.464 06:49:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.464 06:49:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:57.464 06:49:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:59.992 06:49:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:59.992 06:49:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:59.992 06:49:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.992 06:49:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:59.992 06:49:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.992 06:49:13 -- common/autotest_common.sh@1187 -- # return 0 00:11:59.992 06:49:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.992 06:49:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.992 06:49:13 -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.992 06:49:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:59.992 06:49:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.992 06:49:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:59.992 06:49:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.992 06:49:13 -- common/autotest_common.sh@1210 -- # return 0 00:11:59.992 06:49:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.992 06:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.992 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 06:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.992 06:49:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.992 06:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.992 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 06:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.992 06:49:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.992 06:49:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.992 06:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.992 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 06:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.992 06:49:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.992 06:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.992 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 [2024-05-15 06:49:13.799104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.992 06:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.992 06:49:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.992 
06:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.992 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 06:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.992 06:49:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.992 06:49:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:59.992 06:49:13 -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 06:49:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:59.992 06:49:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.249 06:49:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.249 06:49:14 -- common/autotest_common.sh@1177 -- # local i=0 00:12:00.249 06:49:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.249 06:49:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:00.249 06:49:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:02.775 06:49:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:02.775 06:49:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:02.775 06:49:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.775 06:49:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:02.775 06:49:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.775 06:49:16 -- common/autotest_common.sh@1187 -- # return 0 00:12:02.775 06:49:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.775 06:49:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.775 06:49:16 -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.775 06:49:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:02.775 06:49:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.775 06:49:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:02.775 06:49:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.775 06:49:16 -- common/autotest_common.sh@1210 -- # return 0 00:12:02.775 06:49:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@99 -- # seq 1 5 00:12:02.775 06:49:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.775 06:49:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 [2024-05-15 06:49:16.502478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.775 06:49:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 [2024-05-15 06:49:16.550528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.775 06:49:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.775 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.775 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.775 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.776 06:49:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 [2024-05-15 06:49:16.598685] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.776 06:49:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 [2024-05-15 06:49:16.646831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 
06:49:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.776 06:49:16 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 [2024-05-15 06:49:16.695034] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x 00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:02.776 06:49:16 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
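The nvmf_get_stats dump that follows is reduced by the script's jsum helper, which sums one numeric field across all poll groups via jq and awk (the @19/@20 trace lines). A sketch of that reduction, assuming rpc.py is on PATH and talks to the same target:

    # jsum: sum one per-poll-group counter out of nvmf_get_stats (sketch).
    jsum() {
        local filter=$1
        rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in the dump below
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 336 in the dump below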
00:12:02.776 06:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:02.776 06:49:16 -- common/autotest_common.sh@10 -- # set +x
00:12:02.776 06:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:02.776 06:49:16 -- target/rpc.sh@110 -- # stats='{
00:12:02.776 "tick_rate": 2700000000,
00:12:02.776 "poll_groups": [
00:12:02.776 {
00:12:02.776 "name": "nvmf_tgt_poll_group_0",
00:12:02.776 "admin_qpairs": 2,
00:12:02.776 "io_qpairs": 84,
00:12:02.776 "current_admin_qpairs": 0,
00:12:02.776 "current_io_qpairs": 0,
00:12:02.776 "pending_bdev_io": 0,
00:12:02.776 "completed_nvme_io": 136,
00:12:02.776 "transports": [
00:12:02.776 {
00:12:02.776 "trtype": "TCP"
00:12:02.776 }
00:12:02.776 ]
00:12:02.776 },
00:12:02.776 {
00:12:02.776 "name": "nvmf_tgt_poll_group_1",
00:12:02.776 "admin_qpairs": 2,
00:12:02.776 "io_qpairs": 84,
00:12:02.776 "current_admin_qpairs": 0,
00:12:02.776 "current_io_qpairs": 0,
00:12:02.776 "pending_bdev_io": 0,
00:12:02.776 "completed_nvme_io": 231,
00:12:02.776 "transports": [
00:12:02.776 {
00:12:02.776 "trtype": "TCP"
00:12:02.776 }
00:12:02.776 ]
00:12:02.776 },
00:12:02.776 {
00:12:02.776 "name": "nvmf_tgt_poll_group_2",
00:12:02.776 "admin_qpairs": 1,
00:12:02.776 "io_qpairs": 84,
00:12:02.776 "current_admin_qpairs": 0,
00:12:02.776 "current_io_qpairs": 0,
00:12:02.776 "pending_bdev_io": 0,
00:12:02.776 "completed_nvme_io": 184,
00:12:02.776 "transports": [
00:12:02.776 {
00:12:02.776 "trtype": "TCP"
00:12:02.776 }
00:12:02.776 ]
00:12:02.776 },
00:12:02.776 {
00:12:02.776 "name": "nvmf_tgt_poll_group_3",
00:12:02.776 "admin_qpairs": 2,
00:12:02.776 "io_qpairs": 84,
00:12:02.776 "current_admin_qpairs": 0,
00:12:02.776 "current_io_qpairs": 0,
00:12:02.776 "pending_bdev_io": 0,
00:12:02.776 "completed_nvme_io": 135,
00:12:02.776 "transports": [
00:12:02.776 {
00:12:02.776 "trtype": "TCP"
00:12:02.776 }
00:12:02.776 ]
00:12:02.776 }
00:12:02.776 ]
00:12:02.776 }'
00:12:02.776 06:49:16 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:12:02.776 06:49:16 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:02.776 06:49:16 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:02.776 06:49:16 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:02.776 06:49:16 -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:12:02.776 06:49:16 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:12:02.776 06:49:16 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:02.776 06:49:16 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:02.776 06:49:16 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:02.776 06:49:16 -- target/rpc.sh@113 -- # (( 336 > 0 ))
00:12:02.776 06:49:16 -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:12:02.776 06:49:16 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:12:02.776 06:49:16 -- target/rpc.sh@123 -- # nvmftestfini
00:12:02.776 06:49:16 -- nvmf/common.sh@476 -- # nvmfcleanup
00:12:02.776 06:49:16 -- nvmf/common.sh@116 -- # sync
00:12:02.776 06:49:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:12:02.776 06:49:16 -- nvmf/common.sh@119 -- # set +e
00:12:02.776 06:49:16 -- nvmf/common.sh@120 -- # for i in {1..20}
00:12:02.776 06:49:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:12:02.776 rmmod nvme_tcp
00:12:02.776 rmmod nvme_fabrics
00:12:02.776 rmmod nvme_keyring
00:12:02.776 06:49:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:12:02.776 06:49:16 -- nvmf/common.sh@123 -- # set -e
00:12:02.776 06:49:16 -- nvmf/common.sh@124 -- # return 0
00:12:02.776 06:49:16 -- nvmf/common.sh@477 -- # '[' -n 443408 ']'
00:12:02.776 06:49:16 -- nvmf/common.sh@478 -- # killprocess 443408
00:12:02.776 06:49:16 -- common/autotest_common.sh@926 -- # '[' -z 443408 ']'
00:12:02.776 06:49:16 -- common/autotest_common.sh@930 -- # kill -0 443408
00:12:02.776 06:49:16 -- common/autotest_common.sh@931 -- # uname
00:12:02.776 06:49:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:12:02.776 06:49:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 443408
00:12:02.776 06:49:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:12:02.776 06:49:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:12:02.776 06:49:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 443408'
00:12:02.776 killing process with pid 443408
00:12:02.776 06:49:16 -- common/autotest_common.sh@945 -- # kill 443408
00:12:02.776 06:49:16 -- common/autotest_common.sh@950 -- # wait 443408
00:12:03.036 06:49:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:03.036 06:49:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:03.036 06:49:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:03.036 06:49:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:03.036 06:49:17 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:03.036 06:49:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:03.036 06:49:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:03.036 06:49:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:05.575 06:49:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:12:05.575
00:12:05.575 real 0m26.227s
00:12:05.575 user 1m23.695s
00:12:05.575 sys 0m4.289s
00:12:05.575 06:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:05.575 06:49:19 -- common/autotest_common.sh@10 -- # set +x
00:12:05.575 ************************************
00:12:05.575 END TEST nvmf_rpc
00:12:05.575 ************************************
00:12:05.575 06:49:19 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:12:05.575 06:49:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:12:05.575 06:49:19 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:12:05.575 06:49:19 -- common/autotest_common.sh@10 -- # set +x
00:12:05.575 ************************************
00:12:05.575 START TEST nvmf_invalid
00:12:05.575 ************************************
00:12:05.575 06:49:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:12:05.575 * Looking for test storage...
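The nvmftestfini teardown traced above boils down to unloading the host-side NVMe modules, stopping the target process, and flushing the test interface. A condensed sketch under the same assumptions, with the pid and interface name taken from this run:

    # Teardown, following the nvmftestfini trace above (sketch).
    sync
    modprobe -v -r nvme-tcp        # rmmod's nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 443408                    # 443408: nvmf_tgt pid from this run
    wait 443408                    # works because the script started the target itself
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address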
00:12:05.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.575 06:49:19 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.575 06:49:19 -- nvmf/common.sh@7 -- # uname -s 00:12:05.575 06:49:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.575 06:49:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.575 06:49:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.575 06:49:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.575 06:49:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.575 06:49:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.575 06:49:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.575 06:49:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.575 06:49:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.575 06:49:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.575 06:49:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:05.575 06:49:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:05.575 06:49:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.575 06:49:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.575 06:49:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.575 06:49:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.575 06:49:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.575 06:49:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.575 06:49:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.575 06:49:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.575 06:49:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.575 06:49:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.575 06:49:19 -- paths/export.sh@5 -- # export PATH 00:12:05.575 06:49:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.575 06:49:19 -- nvmf/common.sh@46 -- # : 0 00:12:05.575 06:49:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:05.575 06:49:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:05.575 06:49:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:05.575 06:49:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.575 06:49:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.575 06:49:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:05.575 06:49:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:05.575 06:49:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:05.575 06:49:19 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:05.575 06:49:19 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.575 06:49:19 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:05.575 06:49:19 -- target/invalid.sh@14 -- # target=foobar 00:12:05.575 06:49:19 -- target/invalid.sh@16 -- # RANDOM=0 00:12:05.575 06:49:19 -- target/invalid.sh@34 -- # nvmftestinit 00:12:05.575 06:49:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:05.575 06:49:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.575 06:49:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:05.575 06:49:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:05.575 06:49:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:05.575 06:49:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.575 06:49:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:05.575 06:49:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.575 06:49:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:05.575 06:49:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:05.575 06:49:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:05.575 06:49:19 -- common/autotest_common.sh@10 -- # set +x 00:12:08.143 06:49:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:08.144 06:49:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:08.144 06:49:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:08.144 06:49:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:08.144 06:49:21 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:08.144 06:49:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:08.144 06:49:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:08.144 06:49:21 -- nvmf/common.sh@294 -- # net_devs=() 00:12:08.144 06:49:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:08.144 06:49:21 -- nvmf/common.sh@295 -- # e810=() 00:12:08.144 06:49:21 -- nvmf/common.sh@295 -- # local -ga e810 00:12:08.144 06:49:21 -- nvmf/common.sh@296 -- # x722=() 00:12:08.144 06:49:21 -- nvmf/common.sh@296 -- # local -ga x722 00:12:08.144 06:49:21 -- nvmf/common.sh@297 -- # mlx=() 00:12:08.144 06:49:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:08.144 06:49:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.144 06:49:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:08.144 06:49:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:08.144 06:49:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:08.144 06:49:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:08.144 06:49:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:08.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:08.144 06:49:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:08.144 06:49:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:08.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:08.144 06:49:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:08.144 06:49:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:08.144 
06:49:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.144 06:49:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:08.144 06:49:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.144 06:49:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:08.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:08.144 06:49:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.144 06:49:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:08.144 06:49:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.144 06:49:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:08.144 06:49:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.144 06:49:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:08.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:08.144 06:49:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.144 06:49:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:08.144 06:49:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:08.144 06:49:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:08.144 06:49:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:08.144 06:49:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.144 06:49:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.144 06:49:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.144 06:49:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:08.144 06:49:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.144 06:49:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.144 06:49:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:08.144 06:49:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.144 06:49:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.144 06:49:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:08.144 06:49:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:08.144 06:49:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.144 06:49:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.144 06:49:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.144 06:49:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.144 06:49:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:08.144 06:49:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.144 06:49:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.144 06:49:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.144 06:49:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:08.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:12:08.144 00:12:08.144 --- 10.0.0.2 ping statistics --- 00:12:08.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.144 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:12:08.144 06:49:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms
00:12:08.144
00:12:08.144 --- 10.0.0.1 ping statistics ---
00:12:08.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:08.144 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:12:08.144 06:49:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:08.144 06:49:21 -- nvmf/common.sh@410 -- # return 0
00:12:08.144 06:49:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:12:08.144 06:49:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:08.144 06:49:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:12:08.144 06:49:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:12:08.144 06:49:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:08.144 06:49:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:12:08.144 06:49:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:12:08.144 06:49:21 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:12:08.144 06:49:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:12:08.144 06:49:21 -- common/autotest_common.sh@712 -- # xtrace_disable
00:12:08.144 06:49:21 -- common/autotest_common.sh@10 -- # set +x
00:12:08.144 06:49:21 -- nvmf/common.sh@469 -- # nvmfpid=448409
00:12:08.144 06:49:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:08.144 06:49:21 -- nvmf/common.sh@470 -- # waitforlisten 448409
00:12:08.144 06:49:21 -- common/autotest_common.sh@819 -- # '[' -z 448409 ']'
00:12:08.144 06:49:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:08.144 06:49:21 -- common/autotest_common.sh@824 -- # local max_retries=100
00:12:08.144 06:49:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:08.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:08.144 06:49:21 -- common/autotest_common.sh@828 -- # xtrace_disable
00:12:08.144 06:49:21 -- common/autotest_common.sh@10 -- # set +x
00:12:08.144 [2024-05-15 06:49:21.968720] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:12:08.144 [2024-05-15 06:49:21.968810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:08.144 EAL: No free 2048 kB hugepages reported on node 1
00:12:08.144 [2024-05-15 06:49:22.056405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:08.144 [2024-05-15 06:49:22.175821] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:12:08.144 [2024-05-15 06:49:22.175980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:08.144 [2024-05-15 06:49:22.176001] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:08.144 [2024-05-15 06:49:22.176015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
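Every invalid-input case that follows exercises one pattern: call nvmf_create_subsystem with a malformed field, capture the JSON-RPC error text, and glob-match the message. A sketch of the first two cases from this run, assuming the same rpc.py and a running target:

    # Reject an unknown target name (the invalid.sh@40/41 pattern).
    out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32712 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # Reject a serial number containing a control character (0x1f).
    out=$(rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2777 2>&1) || true
    [[ $out == *"Invalid SN"* ]]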
00:12:08.144 [2024-05-15 06:49:22.176074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.144 [2024-05-15 06:49:22.176130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.144 [2024-05-15 06:49:22.176183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.144 [2024-05-15 06:49:22.176187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.711 06:49:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.711 06:49:22 -- common/autotest_common.sh@852 -- # return 0 00:12:08.711 06:49:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:08.711 06:49:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:08.711 06:49:22 -- common/autotest_common.sh@10 -- # set +x 00:12:08.970 06:49:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.970 06:49:22 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:08.970 06:49:22 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32712 00:12:08.970 [2024-05-15 06:49:23.202136] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:09.227 06:49:23 -- target/invalid.sh@40 -- # out='request: 00:12:09.227 { 00:12:09.227 "nqn": "nqn.2016-06.io.spdk:cnode32712", 00:12:09.227 "tgt_name": "foobar", 00:12:09.227 "method": "nvmf_create_subsystem", 00:12:09.227 "req_id": 1 00:12:09.227 } 00:12:09.227 Got JSON-RPC error response 00:12:09.227 response: 00:12:09.227 { 00:12:09.227 "code": -32603, 00:12:09.227 "message": "Unable to find target foobar" 00:12:09.227 }' 00:12:09.227 06:49:23 -- target/invalid.sh@41 -- # [[ request: 00:12:09.227 { 00:12:09.227 "nqn": "nqn.2016-06.io.spdk:cnode32712", 00:12:09.227 "tgt_name": "foobar", 00:12:09.227 "method": "nvmf_create_subsystem", 00:12:09.227 "req_id": 1 00:12:09.227 } 00:12:09.227 Got JSON-RPC error response 00:12:09.227 response: 00:12:09.227 { 00:12:09.227 "code": -32603, 00:12:09.227 "message": "Unable to find target foobar" 00:12:09.227 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:09.227 06:49:23 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:09.227 06:49:23 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2777 00:12:09.227 [2024-05-15 06:49:23.434973] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2777: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:09.227 06:49:23 -- target/invalid.sh@45 -- # out='request: 00:12:09.227 { 00:12:09.227 "nqn": "nqn.2016-06.io.spdk:cnode2777", 00:12:09.227 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.227 "method": "nvmf_create_subsystem", 00:12:09.227 "req_id": 1 00:12:09.227 } 00:12:09.227 Got JSON-RPC error response 00:12:09.227 response: 00:12:09.227 { 00:12:09.227 "code": -32602, 00:12:09.227 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.227 }' 00:12:09.227 06:49:23 -- target/invalid.sh@46 -- # [[ request: 00:12:09.227 { 00:12:09.227 "nqn": "nqn.2016-06.io.spdk:cnode2777", 00:12:09.227 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:09.227 "method": "nvmf_create_subsystem", 00:12:09.227 "req_id": 1 00:12:09.227 } 00:12:09.227 Got JSON-RPC error response 00:12:09.227 response: 00:12:09.227 { 
00:12:09.227 "code": -32602, 00:12:09.227 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:09.227 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:09.227 06:49:23 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:09.227 06:49:23 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32026 00:12:09.484 [2024-05-15 06:49:23.675721] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32026: invalid model number 'SPDK_Controller' 00:12:09.484 06:49:23 -- target/invalid.sh@50 -- # out='request: 00:12:09.484 { 00:12:09.484 "nqn": "nqn.2016-06.io.spdk:cnode32026", 00:12:09.484 "model_number": "SPDK_Controller\u001f", 00:12:09.484 "method": "nvmf_create_subsystem", 00:12:09.484 "req_id": 1 00:12:09.484 } 00:12:09.484 Got JSON-RPC error response 00:12:09.484 response: 00:12:09.484 { 00:12:09.484 "code": -32602, 00:12:09.484 "message": "Invalid MN SPDK_Controller\u001f" 00:12:09.484 }' 00:12:09.484 06:49:23 -- target/invalid.sh@51 -- # [[ request: 00:12:09.484 { 00:12:09.484 "nqn": "nqn.2016-06.io.spdk:cnode32026", 00:12:09.484 "model_number": "SPDK_Controller\u001f", 00:12:09.484 "method": "nvmf_create_subsystem", 00:12:09.484 "req_id": 1 00:12:09.484 } 00:12:09.484 Got JSON-RPC error response 00:12:09.484 response: 00:12:09.484 { 00:12:09.484 "code": -32602, 00:12:09.484 "message": "Invalid MN SPDK_Controller\u001f" 00:12:09.484 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:09.484 06:49:23 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:09.484 06:49:23 -- target/invalid.sh@19 -- # local length=21 ll 00:12:09.484 06:49:23 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:09.484 06:49:23 -- target/invalid.sh@21 -- # local chars 00:12:09.484 06:49:23 -- target/invalid.sh@22 -- # local string 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # printf %x 37 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # string+=% 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # printf %x 43 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # string+=+ 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # printf %x 72 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # string+=H 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # printf %x 92 00:12:09.484 06:49:23 -- 
target/invalid.sh@25 -- # echo -e '\x5c' 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # string+='\' 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # printf %x 112 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:09.484 06:49:23 -- target/invalid.sh@25 -- # string+=p 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.484 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.741 06:49:23 -- target/invalid.sh@25 -- # printf %x 72 00:12:09.741 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:09.741 06:49:23 -- target/invalid.sh@25 -- # string+=H 00:12:09.741 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.741 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.741 06:49:23 -- target/invalid.sh@25 -- # printf %x 124 00:12:09.741 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:09.741 06:49:23 -- target/invalid.sh@25 -- # string+='|' 00:12:09.741 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 70 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=F 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 85 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=U 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 119 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=w 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 39 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=\' 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 50 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=2 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 87 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=W 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 80 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=P 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 41 00:12:09.742 06:49:23 -- 
target/invalid.sh@25 -- # echo -e '\x29' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=')' 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 110 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=n 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 114 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=r 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 114 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=r 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 127 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=$'\177' 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 58 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=: 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # printf %x 49 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:09.742 06:49:23 -- target/invalid.sh@25 -- # string+=1 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:09.742 06:49:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:09.742 06:49:23 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:12:09.742 06:49:23 -- target/invalid.sh@31 -- # echo '%+H\pH|FUw'\''2WP)nrr:1' 00:12:09.742 06:49:23 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '%+H\pH|FUw'\''2WP)nrr:1' nqn.2016-06.io.spdk:cnode12825 00:12:10.001 [2024-05-15 06:49:23.984747] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12825: invalid serial number '%+H\pH|FUw'2WP)nrr:1' 00:12:10.001 06:49:24 -- target/invalid.sh@54 -- # out='request: 00:12:10.001 { 00:12:10.001 "nqn": "nqn.2016-06.io.spdk:cnode12825", 00:12:10.001 "serial_number": "%+H\\pH|FUw'\''2WP)nrr\u007f:1", 00:12:10.001 "method": "nvmf_create_subsystem", 00:12:10.001 "req_id": 1 00:12:10.001 } 00:12:10.001 Got JSON-RPC error response 00:12:10.001 response: 00:12:10.001 { 00:12:10.001 "code": -32602, 00:12:10.001 "message": "Invalid SN %+H\\pH|FUw'\''2WP)nrr\u007f:1" 00:12:10.001 }' 00:12:10.001 06:49:24 -- target/invalid.sh@55 -- # [[ request: 00:12:10.001 { 00:12:10.001 "nqn": "nqn.2016-06.io.spdk:cnode12825", 00:12:10.001 "serial_number": "%+H\\pH|FUw'2WP)nrr\u007f:1", 00:12:10.001 "method": "nvmf_create_subsystem", 00:12:10.001 "req_id": 1 00:12:10.001 } 00:12:10.001 Got JSON-RPC error response 00:12:10.001 response: 00:12:10.001 { 00:12:10.001 
"code": -32602, 00:12:10.001 "message": "Invalid SN %+H\\pH|FUw'2WP)nrr\u007f:1" 00:12:10.001 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:10.001 06:49:24 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:10.001 06:49:24 -- target/invalid.sh@19 -- # local length=41 ll 00:12:10.001 06:49:24 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:10.001 06:49:24 -- target/invalid.sh@21 -- # local chars 00:12:10.001 06:49:24 -- target/invalid.sh@22 -- # local string 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 75 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=K 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 126 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+='~' 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 123 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+='{' 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 66 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=B 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 115 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=s 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 34 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+='"' 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 83 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=S 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 115 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=s 
00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 93 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=']' 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 84 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=T 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 48 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=0 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 111 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=o 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 56 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=8 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 78 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=N 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 57 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=9 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 110 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=n 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 112 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=p 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 45 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=- 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 69 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=E 
00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.001 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # printf %x 52 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:10.001 06:49:24 -- target/invalid.sh@25 -- # string+=4 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 79 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=O 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 37 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=% 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 33 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+='!' 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 56 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=8 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 35 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+='#' 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 76 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=L 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 47 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=/ 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 74 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=J 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 53 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=5 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 37 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=% 
00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 80 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=P 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 88 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=X 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 120 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=x 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 106 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=j 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 107 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=k 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 112 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=p 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 69 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=E 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 105 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=i 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 54 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=6 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 76 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=L 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # printf %x 44 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:10.002 06:49:24 -- target/invalid.sh@25 -- # string+=, 
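The iterations above are the xtrace of target/invalid.sh's gen_random_s helper: draw a decimal code from the chars list (32-127, i.e. printable ASCII plus DEL), render it via printf %x and echo -e, and append one character per pass. Reconstructed from the trace (the in-tree helper may differ in detail), the generator is roughly:

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))   # same code range as the chars=() array traced above
        for ((ll = 0; ll < length; ll++)); do
            # random decimal code -> hex escape -> literal character
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

The [[ K == \- ]] test just below evidently guards against a generated string that starts with '-', which rpc.py would otherwise parse as an option flag.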
00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:10.002 06:49:24 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:10.002 06:49:24 -- target/invalid.sh@28 -- # [[ K == \- ]] 00:12:10.002 06:49:24 -- target/invalid.sh@31 -- # echo 'K~{Bs"Ss]T0o8N9np-E4O%!8#L/J5%PXxjkpEi6L,' 00:12:10.002 06:49:24 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'K~{Bs"Ss]T0o8N9np-E4O%!8#L/J5%PXxjkpEi6L,' nqn.2016-06.io.spdk:cnode4198 00:12:10.260 [2024-05-15 06:49:24.406171] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4198: invalid model number 'K~{Bs"Ss]T0o8N9np-E4O%!8#L/J5%PXxjkpEi6L,' 00:12:10.260 06:49:24 -- target/invalid.sh@58 -- # out='request: 00:12:10.260 { 00:12:10.260 "nqn": "nqn.2016-06.io.spdk:cnode4198", 00:12:10.260 "model_number": "K~{Bs\"Ss]T0o8N9np-E4O%!8#L/J5%PXxjkpEi6L,", 00:12:10.260 "method": "nvmf_create_subsystem", 00:12:10.260 "req_id": 1 00:12:10.260 } 00:12:10.260 Got JSON-RPC error response 00:12:10.260 response: 00:12:10.260 { 00:12:10.260 "code": -32602, 00:12:10.260 "message": "Invalid MN K~{Bs\"Ss]T0o8N9np-E4O%!8#L/J5%PXxjkpEi6L," 00:12:10.260 }' 00:12:10.260 06:49:24 -- target/invalid.sh@59 -- # [[ request: 00:12:10.260 { 00:12:10.260 "nqn": "nqn.2016-06.io.spdk:cnode4198", 00:12:10.260 "model_number": "K~{Bs\"Ss]T0o8N9np-E4O%!8#L/J5%PXxjkpEi6L,", 00:12:10.260 "method": "nvmf_create_subsystem", 00:12:10.260 "req_id": 1 00:12:10.260 } 00:12:10.260 Got JSON-RPC error response 00:12:10.260 response: 00:12:10.260 { 00:12:10.260 "code": -32602, 00:12:10.260 "message": "Invalid MN K~{Bs\"Ss]T0o8N9np-E4O%!8#L/J5%PXxjkpEi6L," 00:12:10.260 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:10.260 06:49:24 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:10.518 [2024-05-15 06:49:24.639027] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.518 06:49:24 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:10.776 06:49:24 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:10.776 06:49:24 -- target/invalid.sh@67 -- # echo '' 00:12:10.776 06:49:24 -- target/invalid.sh@67 -- # head -n 1 00:12:10.776 06:49:24 -- target/invalid.sh@67 -- # IP= 00:12:10.776 06:49:24 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:11.034 [2024-05-15 06:49:25.104575] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:11.034 06:49:25 -- target/invalid.sh@69 -- # out='request: 00:12:11.034 { 00:12:11.034 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:11.034 "listen_address": { 00:12:11.034 "trtype": "tcp", 00:12:11.034 "traddr": "", 00:12:11.034 "trsvcid": "4421" 00:12:11.034 }, 00:12:11.034 "method": "nvmf_subsystem_remove_listener", 00:12:11.034 "req_id": 1 00:12:11.034 } 00:12:11.034 Got JSON-RPC error response 00:12:11.034 response: 00:12:11.035 { 00:12:11.035 "code": -32602, 00:12:11.035 "message": "Invalid parameters" 00:12:11.035 }' 00:12:11.035 06:49:25 -- target/invalid.sh@70 -- # [[ request: 00:12:11.035 { 00:12:11.035 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:11.035 "listen_address": { 00:12:11.035 "trtype": "tcp", 00:12:11.035 "traddr": "", 00:12:11.035 "trsvcid": "4421" 00:12:11.035 }, 00:12:11.035 
"method": "nvmf_subsystem_remove_listener", 00:12:11.035 "req_id": 1 00:12:11.035 } 00:12:11.035 Got JSON-RPC error response 00:12:11.035 response: 00:12:11.035 { 00:12:11.035 "code": -32602, 00:12:11.035 "message": "Invalid parameters" 00:12:11.035 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:11.035 06:49:25 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29243 -i 0 00:12:11.293 [2024-05-15 06:49:25.361456] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29243: invalid cntlid range [0-65519] 00:12:11.293 06:49:25 -- target/invalid.sh@73 -- # out='request: 00:12:11.293 { 00:12:11.293 "nqn": "nqn.2016-06.io.spdk:cnode29243", 00:12:11.293 "min_cntlid": 0, 00:12:11.293 "method": "nvmf_create_subsystem", 00:12:11.293 "req_id": 1 00:12:11.293 } 00:12:11.293 Got JSON-RPC error response 00:12:11.293 response: 00:12:11.293 { 00:12:11.293 "code": -32602, 00:12:11.293 "message": "Invalid cntlid range [0-65519]" 00:12:11.293 }' 00:12:11.293 06:49:25 -- target/invalid.sh@74 -- # [[ request: 00:12:11.293 { 00:12:11.293 "nqn": "nqn.2016-06.io.spdk:cnode29243", 00:12:11.293 "min_cntlid": 0, 00:12:11.293 "method": "nvmf_create_subsystem", 00:12:11.293 "req_id": 1 00:12:11.293 } 00:12:11.293 Got JSON-RPC error response 00:12:11.293 response: 00:12:11.293 { 00:12:11.293 "code": -32602, 00:12:11.293 "message": "Invalid cntlid range [0-65519]" 00:12:11.293 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.293 06:49:25 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11375 -i 65520 00:12:11.551 [2024-05-15 06:49:25.602246] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11375: invalid cntlid range [65520-65519] 00:12:11.551 06:49:25 -- target/invalid.sh@75 -- # out='request: 00:12:11.551 { 00:12:11.551 "nqn": "nqn.2016-06.io.spdk:cnode11375", 00:12:11.551 "min_cntlid": 65520, 00:12:11.551 "method": "nvmf_create_subsystem", 00:12:11.551 "req_id": 1 00:12:11.551 } 00:12:11.551 Got JSON-RPC error response 00:12:11.551 response: 00:12:11.551 { 00:12:11.551 "code": -32602, 00:12:11.551 "message": "Invalid cntlid range [65520-65519]" 00:12:11.551 }' 00:12:11.551 06:49:25 -- target/invalid.sh@76 -- # [[ request: 00:12:11.551 { 00:12:11.551 "nqn": "nqn.2016-06.io.spdk:cnode11375", 00:12:11.551 "min_cntlid": 65520, 00:12:11.551 "method": "nvmf_create_subsystem", 00:12:11.551 "req_id": 1 00:12:11.551 } 00:12:11.551 Got JSON-RPC error response 00:12:11.551 response: 00:12:11.551 { 00:12:11.551 "code": -32602, 00:12:11.551 "message": "Invalid cntlid range [65520-65519]" 00:12:11.551 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.551 06:49:25 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4714 -I 0 00:12:11.809 [2024-05-15 06:49:25.847056] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4714: invalid cntlid range [1-0] 00:12:11.809 06:49:25 -- target/invalid.sh@77 -- # out='request: 00:12:11.809 { 00:12:11.809 "nqn": "nqn.2016-06.io.spdk:cnode4714", 00:12:11.809 "max_cntlid": 0, 00:12:11.809 "method": "nvmf_create_subsystem", 00:12:11.809 "req_id": 1 00:12:11.809 } 00:12:11.809 Got JSON-RPC error response 00:12:11.809 response: 00:12:11.809 { 00:12:11.809 "code": -32602, 00:12:11.809 
"message": "Invalid cntlid range [1-0]" 00:12:11.809 }' 00:12:11.809 06:49:25 -- target/invalid.sh@78 -- # [[ request: 00:12:11.809 { 00:12:11.809 "nqn": "nqn.2016-06.io.spdk:cnode4714", 00:12:11.809 "max_cntlid": 0, 00:12:11.809 "method": "nvmf_create_subsystem", 00:12:11.809 "req_id": 1 00:12:11.809 } 00:12:11.809 Got JSON-RPC error response 00:12:11.809 response: 00:12:11.809 { 00:12:11.809 "code": -32602, 00:12:11.809 "message": "Invalid cntlid range [1-0]" 00:12:11.809 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:11.809 06:49:25 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22449 -I 65520 00:12:12.066 [2024-05-15 06:49:26.079842] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22449: invalid cntlid range [1-65520] 00:12:12.066 06:49:26 -- target/invalid.sh@79 -- # out='request: 00:12:12.066 { 00:12:12.066 "nqn": "nqn.2016-06.io.spdk:cnode22449", 00:12:12.066 "max_cntlid": 65520, 00:12:12.066 "method": "nvmf_create_subsystem", 00:12:12.066 "req_id": 1 00:12:12.066 } 00:12:12.066 Got JSON-RPC error response 00:12:12.066 response: 00:12:12.066 { 00:12:12.066 "code": -32602, 00:12:12.066 "message": "Invalid cntlid range [1-65520]" 00:12:12.066 }' 00:12:12.066 06:49:26 -- target/invalid.sh@80 -- # [[ request: 00:12:12.066 { 00:12:12.066 "nqn": "nqn.2016-06.io.spdk:cnode22449", 00:12:12.066 "max_cntlid": 65520, 00:12:12.067 "method": "nvmf_create_subsystem", 00:12:12.067 "req_id": 1 00:12:12.067 } 00:12:12.067 Got JSON-RPC error response 00:12:12.067 response: 00:12:12.067 { 00:12:12.067 "code": -32602, 00:12:12.067 "message": "Invalid cntlid range [1-65520]" 00:12:12.067 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.067 06:49:26 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6150 -i 6 -I 5 00:12:12.325 [2024-05-15 06:49:26.316658] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6150: invalid cntlid range [6-5] 00:12:12.325 06:49:26 -- target/invalid.sh@83 -- # out='request: 00:12:12.325 { 00:12:12.325 "nqn": "nqn.2016-06.io.spdk:cnode6150", 00:12:12.325 "min_cntlid": 6, 00:12:12.325 "max_cntlid": 5, 00:12:12.325 "method": "nvmf_create_subsystem", 00:12:12.325 "req_id": 1 00:12:12.325 } 00:12:12.325 Got JSON-RPC error response 00:12:12.325 response: 00:12:12.325 { 00:12:12.325 "code": -32602, 00:12:12.325 "message": "Invalid cntlid range [6-5]" 00:12:12.325 }' 00:12:12.325 06:49:26 -- target/invalid.sh@84 -- # [[ request: 00:12:12.325 { 00:12:12.325 "nqn": "nqn.2016-06.io.spdk:cnode6150", 00:12:12.325 "min_cntlid": 6, 00:12:12.325 "max_cntlid": 5, 00:12:12.325 "method": "nvmf_create_subsystem", 00:12:12.325 "req_id": 1 00:12:12.325 } 00:12:12.325 Got JSON-RPC error response 00:12:12.325 response: 00:12:12.325 { 00:12:12.325 "code": -32602, 00:12:12.325 "message": "Invalid cntlid range [6-5]" 00:12:12.325 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:12.325 06:49:26 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:12.325 06:49:26 -- target/invalid.sh@87 -- # out='request: 00:12:12.325 { 00:12:12.325 "name": "foobar", 00:12:12.325 "method": "nvmf_delete_target", 00:12:12.325 "req_id": 1 00:12:12.325 } 00:12:12.325 Got JSON-RPC error response 00:12:12.325 response: 00:12:12.325 { 00:12:12.325 
"code": -32602, 00:12:12.325 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:12.325 }' 00:12:12.325 06:49:26 -- target/invalid.sh@88 -- # [[ request: 00:12:12.325 { 00:12:12.325 "name": "foobar", 00:12:12.325 "method": "nvmf_delete_target", 00:12:12.325 "req_id": 1 00:12:12.325 } 00:12:12.325 Got JSON-RPC error response 00:12:12.325 response: 00:12:12.325 { 00:12:12.325 "code": -32602, 00:12:12.325 "message": "The specified target doesn't exist, cannot delete it." 00:12:12.325 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:12.325 06:49:26 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:12.325 06:49:26 -- target/invalid.sh@91 -- # nvmftestfini 00:12:12.325 06:49:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:12.325 06:49:26 -- nvmf/common.sh@116 -- # sync 00:12:12.325 06:49:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:12.325 06:49:26 -- nvmf/common.sh@119 -- # set +e 00:12:12.325 06:49:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:12.325 06:49:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:12.325 rmmod nvme_tcp 00:12:12.325 rmmod nvme_fabrics 00:12:12.325 rmmod nvme_keyring 00:12:12.325 06:49:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:12.325 06:49:26 -- nvmf/common.sh@123 -- # set -e 00:12:12.325 06:49:26 -- nvmf/common.sh@124 -- # return 0 00:12:12.325 06:49:26 -- nvmf/common.sh@477 -- # '[' -n 448409 ']' 00:12:12.325 06:49:26 -- nvmf/common.sh@478 -- # killprocess 448409 00:12:12.325 06:49:26 -- common/autotest_common.sh@926 -- # '[' -z 448409 ']' 00:12:12.325 06:49:26 -- common/autotest_common.sh@930 -- # kill -0 448409 00:12:12.325 06:49:26 -- common/autotest_common.sh@931 -- # uname 00:12:12.325 06:49:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:12.325 06:49:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 448409 00:12:12.325 06:49:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:12.325 06:49:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:12.325 06:49:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 448409' 00:12:12.325 killing process with pid 448409 00:12:12.325 06:49:26 -- common/autotest_common.sh@945 -- # kill 448409 00:12:12.325 06:49:26 -- common/autotest_common.sh@950 -- # wait 448409 00:12:12.584 06:49:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:12.584 06:49:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:12.584 06:49:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:12.584 06:49:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.584 06:49:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:12.584 06:49:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.584 06:49:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.584 06:49:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.125 06:49:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:15.125 00:12:15.125 real 0m9.544s 00:12:15.125 user 0m22.162s 00:12:15.125 sys 0m2.750s 00:12:15.125 06:49:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.125 06:49:28 -- common/autotest_common.sh@10 -- # set +x 00:12:15.125 ************************************ 00:12:15.125 END TEST nvmf_invalid 00:12:15.125 ************************************ 00:12:15.125 06:49:28 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:15.125 06:49:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:15.125 06:49:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:15.125 06:49:28 -- common/autotest_common.sh@10 -- # set +x 00:12:15.125 ************************************ 00:12:15.125 START TEST nvmf_abort 00:12:15.125 ************************************ 00:12:15.125 06:49:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:15.125 * Looking for test storage... 00:12:15.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.125 06:49:28 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.125 06:49:28 -- nvmf/common.sh@7 -- # uname -s 00:12:15.125 06:49:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.125 06:49:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.125 06:49:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.125 06:49:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.125 06:49:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.125 06:49:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.125 06:49:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.125 06:49:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.125 06:49:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.125 06:49:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.125 06:49:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.125 06:49:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.125 06:49:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.125 06:49:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.125 06:49:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.125 06:49:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.125 06:49:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.125 06:49:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.125 06:49:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.125 06:49:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.125 06:49:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.125 06:49:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.125 06:49:28 -- paths/export.sh@5 -- # export PATH 00:12:15.125 06:49:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.125 06:49:28 -- nvmf/common.sh@46 -- # : 0 00:12:15.125 06:49:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:15.125 06:49:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:15.125 06:49:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:15.125 06:49:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.125 06:49:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.125 06:49:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:15.125 06:49:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:15.125 06:49:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:15.125 06:49:28 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.125 06:49:28 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:15.125 06:49:28 -- target/abort.sh@14 -- # nvmftestinit 00:12:15.125 06:49:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:15.125 06:49:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.125 06:49:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:15.125 06:49:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:15.125 06:49:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:15.125 06:49:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.125 06:49:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.125 06:49:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.125 06:49:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:15.125 06:49:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:15.125 06:49:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:15.125 06:49:28 -- common/autotest_common.sh@10 -- # set +x 00:12:17.657 06:49:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
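gather_supported_nvmf_pci_devs, whose trace follows, matches the PCI bus cache against known NIC device IDs -- with SPDK_TEST_NVMF_NICS=e810 that means the Intel (0x8086) E810 variants 0x1592 and 0x159b -- and then resolves each matching function to its kernel net device through sysfs. The resolution step, condensed from the trace below (how pci_devs is populated is elided here):

    for pci in "${pci_devs[@]}"; do                       # 0000:0a:00.0 and 0000:0a:00.1 here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs lists the bound net devices
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path: cvl_0_0, cvl_0_1
        net_devs+=("${pci_net_devs[@]}")
    done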
00:12:17.657 06:49:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:17.657 06:49:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:17.657 06:49:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:17.657 06:49:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:17.657 06:49:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:17.657 06:49:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:17.657 06:49:31 -- nvmf/common.sh@294 -- # net_devs=() 00:12:17.657 06:49:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:17.657 06:49:31 -- nvmf/common.sh@295 -- # e810=() 00:12:17.657 06:49:31 -- nvmf/common.sh@295 -- # local -ga e810 00:12:17.657 06:49:31 -- nvmf/common.sh@296 -- # x722=() 00:12:17.657 06:49:31 -- nvmf/common.sh@296 -- # local -ga x722 00:12:17.657 06:49:31 -- nvmf/common.sh@297 -- # mlx=() 00:12:17.657 06:49:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:17.657 06:49:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.657 06:49:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:17.657 06:49:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:17.657 06:49:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:17.657 06:49:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:17.657 06:49:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:17.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:17.657 06:49:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:17.657 06:49:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:17.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:17.657 06:49:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
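Once both E810 ports are resolved, nvmf_tcp_init (traced below) splits them across a network namespace so the target and the initiator can exchange real TCP traffic on a single host. Stripped of the xtrace, the wiring performed in this run is (address flushes and loopback bring-up omitted):

    ip netns add cvl_0_0_ns_spdk                       # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check

The pings that follow confirm sub-millisecond reachability in both directions before the nvmf target is started inside the namespace.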
00:12:17.657 06:49:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:17.657 06:49:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.657 06:49:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:17.657 06:49:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.657 06:49:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:17.657 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:17.657 06:49:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.657 06:49:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:17.657 06:49:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.657 06:49:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:17.657 06:49:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.657 06:49:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:17.657 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:17.657 06:49:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.657 06:49:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:17.657 06:49:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:17.657 06:49:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:17.657 06:49:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.657 06:49:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.657 06:49:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.657 06:49:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:17.657 06:49:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.657 06:49:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.657 06:49:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:17.657 06:49:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.657 06:49:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.657 06:49:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:17.657 06:49:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:17.657 06:49:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.657 06:49:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.657 06:49:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.657 06:49:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.657 06:49:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:17.657 06:49:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.657 06:49:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.657 06:49:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.657 06:49:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:17.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:17.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:12:17.657 00:12:17.657 --- 10.0.0.2 ping statistics --- 00:12:17.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.657 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:12:17.657 06:49:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:12:17.657 00:12:17.657 --- 10.0.0.1 ping statistics --- 00:12:17.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.657 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:17.657 06:49:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.657 06:49:31 -- nvmf/common.sh@410 -- # return 0 00:12:17.657 06:49:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:17.657 06:49:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.657 06:49:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:17.657 06:49:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.657 06:49:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:17.657 06:49:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:17.657 06:49:31 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:17.657 06:49:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:17.657 06:49:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:17.657 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.657 06:49:31 -- nvmf/common.sh@469 -- # nvmfpid=451392 00:12:17.657 06:49:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:17.657 06:49:31 -- nvmf/common.sh@470 -- # waitforlisten 451392 00:12:17.657 06:49:31 -- common/autotest_common.sh@819 -- # '[' -z 451392 ']' 00:12:17.657 06:49:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.657 06:49:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:17.657 06:49:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.657 06:49:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:17.657 06:49:31 -- common/autotest_common.sh@10 -- # set +x 00:12:17.657 [2024-05-15 06:49:31.694184] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:12:17.657 [2024-05-15 06:49:31.694292] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.657 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.657 [2024-05-15 06:49:31.775429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:17.916 [2024-05-15 06:49:31.893936] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:17.916 [2024-05-15 06:49:31.894107] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.916 [2024-05-15 06:49:31.894127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:17.916 [2024-05-15 06:49:31.894141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.916 [2024-05-15 06:49:31.894222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.916 [2024-05-15 06:49:31.894285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.916 [2024-05-15 06:49:31.894289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.482 06:49:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:18.482 06:49:32 -- common/autotest_common.sh@852 -- # return 0 00:12:18.482 06:49:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:18.482 06:49:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:18.482 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.482 06:49:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.482 06:49:32 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:18.482 06:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.482 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.482 [2024-05-15 06:49:32.668491] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.482 06:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.482 06:49:32 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:18.482 06:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.482 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.482 Malloc0 00:12:18.482 06:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.482 06:49:32 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:18.482 06:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.482 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 Delay0 00:12:18.740 06:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.740 06:49:32 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:18.740 06:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.740 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 06:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.740 06:49:32 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:18.740 06:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.740 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 06:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.740 06:49:32 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:18.740 06:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.740 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 [2024-05-15 06:49:32.738791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.740 06:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:18.740 06:49:32 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.740 06:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:18.740 06:49:32 -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 06:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:12:18.740 06:49:32 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:18.740 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.740 [2024-05-15 06:49:32.835113] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:21.266 Initializing NVMe Controllers 00:12:21.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:21.266 controller IO queue size 128 less than required 00:12:21.266 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:21.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:21.266 Initialization complete. Launching workers. 00:12:21.266 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32425 00:12:21.266 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32486, failed to submit 62 00:12:21.266 success 32425, unsuccess 61, failed 0 00:12:21.266 06:49:34 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:21.266 06:49:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:21.266 06:49:34 -- common/autotest_common.sh@10 -- # set +x 00:12:21.266 06:49:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:21.266 06:49:34 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:21.266 06:49:34 -- target/abort.sh@38 -- # nvmftestfini 00:12:21.266 06:49:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:21.266 06:49:34 -- nvmf/common.sh@116 -- # sync 00:12:21.266 06:49:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:21.266 06:49:34 -- nvmf/common.sh@119 -- # set +e 00:12:21.266 06:49:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:21.266 06:49:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:21.266 rmmod nvme_tcp 00:12:21.266 rmmod nvme_fabrics 00:12:21.266 rmmod nvme_keyring 00:12:21.266 06:49:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:21.266 06:49:34 -- nvmf/common.sh@123 -- # set -e 00:12:21.266 06:49:34 -- nvmf/common.sh@124 -- # return 0 00:12:21.266 06:49:34 -- nvmf/common.sh@477 -- # '[' -n 451392 ']' 00:12:21.266 06:49:34 -- nvmf/common.sh@478 -- # killprocess 451392 00:12:21.266 06:49:34 -- common/autotest_common.sh@926 -- # '[' -z 451392 ']' 00:12:21.266 06:49:34 -- common/autotest_common.sh@930 -- # kill -0 451392 00:12:21.266 06:49:34 -- common/autotest_common.sh@931 -- # uname 00:12:21.266 06:49:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:21.266 06:49:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 451392 00:12:21.266 06:49:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:21.266 06:49:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:21.266 06:49:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 451392' 00:12:21.266 killing process with pid 451392 00:12:21.266 06:49:35 -- common/autotest_common.sh@945 -- # kill 451392 00:12:21.266 06:49:35 -- common/autotest_common.sh@950 -- # wait 451392 00:12:21.266 06:49:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:21.266 06:49:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:21.266 06:49:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:21.266 06:49:35 -- nvmf/common.sh@273 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.266 06:49:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:21.266 06:49:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.266 06:49:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.266 06:49:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.171 06:49:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:23.171 00:12:23.171 real 0m8.489s 00:12:23.171 user 0m12.774s 00:12:23.171 sys 0m2.942s 00:12:23.171 06:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:23.171 06:49:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.171 ************************************ 00:12:23.171 END TEST nvmf_abort 00:12:23.171 ************************************ 00:12:23.171 06:49:37 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:23.171 06:49:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:23.171 06:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:23.171 06:49:37 -- common/autotest_common.sh@10 -- # set +x 00:12:23.171 ************************************ 00:12:23.171 START TEST nvmf_ns_hotplug_stress 00:12:23.171 ************************************ 00:12:23.171 06:49:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:23.429 * Looking for test storage... 00:12:23.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.429 06:49:37 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.429 06:49:37 -- nvmf/common.sh@7 -- # uname -s 00:12:23.429 06:49:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.429 06:49:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.429 06:49:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.429 06:49:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.429 06:49:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.429 06:49:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.429 06:49:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.429 06:49:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.429 06:49:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.429 06:49:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.429 06:49:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.429 06:49:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.429 06:49:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.429 06:49:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.429 06:49:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.429 06:49:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.429 06:49:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.429 06:49:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.429 06:49:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.429 06:49:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.429 06:49:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.429 06:49:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.429 06:49:37 -- paths/export.sh@5 -- # export PATH 00:12:23.429 06:49:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.429 06:49:37 -- nvmf/common.sh@46 -- # : 0 00:12:23.429 06:49:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:23.429 06:49:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:23.429 06:49:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:23.429 06:49:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.429 06:49:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.429 06:49:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:23.429 06:49:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:23.429 06:49:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:23.429 06:49:37 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.429 06:49:37 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:12:23.429 06:49:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:23.429 06:49:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.429 06:49:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:23.429 06:49:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:23.429 06:49:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:23.429 06:49:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:23.429 06:49:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.429 06:49:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.430 06:49:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:23.430 06:49:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:23.430 06:49:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:23.430 06:49:37 -- common/autotest_common.sh@10 -- # set +x 00:12:26.009 06:49:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:26.009 06:49:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:26.009 06:49:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:26.009 06:49:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:26.009 06:49:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:26.009 06:49:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:26.009 06:49:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:26.009 06:49:40 -- nvmf/common.sh@294 -- # net_devs=() 00:12:26.009 06:49:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:26.009 06:49:40 -- nvmf/common.sh@295 -- # e810=() 00:12:26.009 06:49:40 -- nvmf/common.sh@295 -- # local -ga e810 00:12:26.009 06:49:40 -- nvmf/common.sh@296 -- # x722=() 00:12:26.009 06:49:40 -- nvmf/common.sh@296 -- # local -ga x722 00:12:26.009 06:49:40 -- nvmf/common.sh@297 -- # mlx=() 00:12:26.009 06:49:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:26.009 06:49:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.009 06:49:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:26.009 06:49:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:26.009 06:49:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:26.009 06:49:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:26.009 06:49:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.009 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.009 06:49:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:26.009 06:49:40 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.009 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:26.009 06:49:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:26.009 06:49:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:26.009 06:49:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.009 06:49:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:26.009 06:49:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.009 06:49:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.009 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.009 06:49:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.009 06:49:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:26.009 06:49:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.009 06:49:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:26.009 06:49:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.009 06:49:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.009 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.009 06:49:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.009 06:49:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:26.009 06:49:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:26.009 06:49:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:26.009 06:49:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.009 06:49:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.009 06:49:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.009 06:49:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:26.009 06:49:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.009 06:49:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.009 06:49:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:26.009 06:49:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.009 06:49:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.009 06:49:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:26.009 06:49:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:26.009 06:49:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.009 06:49:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.009 06:49:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.009 06:49:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.009 06:49:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:26.009 06:49:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
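For reference, the nvmf_tcp_init steps traced above split the two discovered E810 ports into a target/initiator pair: one port is moved into a private network namespace for the SPDK target, while its peer stays in the default namespace for the initiator, so traffic must really cross the wire. A minimal standalone sketch of the same split, using the interface names and addresses this particular run happens to use (run as root; substitute your own devices):

# Isolate the target-side port in its own namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address the initiator side (default namespace) and the target side.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring both ends up; the trace lines that follow also raise lo inside
# the namespace, open TCP port 4420 in iptables, and ping both directions.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up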
00:12:26.009 06:49:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.009 06:49:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.009 06:49:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:26.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:12:26.009 00:12:26.009 --- 10.0.0.2 ping statistics --- 00:12:26.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.009 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:12:26.009 06:49:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:12:26.009 00:12:26.009 --- 10.0.0.1 ping statistics --- 00:12:26.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.009 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:12:26.009 06:49:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.009 06:49:40 -- nvmf/common.sh@410 -- # return 0 00:12:26.009 06:49:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:26.009 06:49:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.009 06:49:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:26.009 06:49:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.009 06:49:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:26.009 06:49:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:26.009 06:49:40 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:12:26.009 06:49:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:26.009 06:49:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:26.009 06:49:40 -- common/autotest_common.sh@10 -- # set +x 00:12:26.009 06:49:40 -- nvmf/common.sh@469 -- # nvmfpid=454198 00:12:26.009 06:49:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:26.009 06:49:40 -- nvmf/common.sh@470 -- # waitforlisten 454198 00:12:26.009 06:49:40 -- common/autotest_common.sh@819 -- # '[' -z 454198 ']' 00:12:26.009 06:49:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.009 06:49:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.009 06:49:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.009 06:49:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.009 06:49:40 -- common/autotest_common.sh@10 -- # set +x 00:12:26.009 [2024-05-15 06:49:40.227499] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
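The target application is launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and waitforlisten then blocks until the RPC socket at /var/tmp/spdk.sock answers, as the "Waiting for process to start up..." line shows. A rough stand-in for that wait, assuming paths relative to an SPDK checkout and the default RPC socket:

# Start the target in the namespace and poll its RPC socket until ready.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done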
00:12:26.009 [2024-05-15 06:49:40.227599] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.268 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.268 [2024-05-15 06:49:40.310235] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:26.268 [2024-05-15 06:49:40.427174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:26.268 [2024-05-15 06:49:40.427349] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.268 [2024-05-15 06:49:40.427369] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.268 [2024-05-15 06:49:40.427393] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.268 [2024-05-15 06:49:40.427477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.268 [2024-05-15 06:49:40.427534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.268 [2024-05-15 06:49:40.427538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.201 06:49:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:27.201 06:49:41 -- common/autotest_common.sh@852 -- # return 0 00:12:27.201 06:49:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:27.201 06:49:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:27.201 06:49:41 -- common/autotest_common.sh@10 -- # set +x 00:12:27.201 06:49:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.201 06:49:41 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:12:27.201 06:49:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:27.201 [2024-05-15 06:49:41.403052] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.201 06:49:41 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.459 06:49:41 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.717 [2024-05-15 06:49:41.889592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.717 06:49:41 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:27.975 06:49:42 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:28.233 Malloc0 00:12:28.233 06:49:42 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:28.491 Delay0 00:12:28.491 06:49:42 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:28.748 06:49:42 -- target/ns_hotplug_stress.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:29.006 NULL1 00:12:29.006 06:49:43 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:29.263 06:49:43 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=454632 00:12:29.263 06:49:43 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:29.263 06:49:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:29.263 06:49:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.263 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.630 Read completed with error (sct=0, sc=11) 00:12:30.631 06:49:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:30.631 06:49:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:12:30.631 06:49:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:30.887 true 00:12:30.887 06:49:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:30.887 06:49:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.817 06:49:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.074 06:49:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:12:32.074 06:49:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:32.074 true 00:12:32.331 06:49:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:32.331 06:49:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.588 06:49:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:32.588 06:49:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:12:32.588 06:49:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:32.845 true 00:12:32.845 06:49:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:32.845 06:49:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
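The repeated add_ns / null_size / bdev_null_resize / remove_ns traces that follow are the stress loop itself: while spdk_nvme_perf (the PERF_PID above) hammers the subsystem with random reads, namespace 1 is detached and re-attached and the null bdev is grown one unit per pass. The suppressed "Read completed with error (sct=0, sc=11)" messages are the expected symptom: reads hitting a momentarily detached namespace (sc 11 is 0x0b, the NVMe generic "Invalid Namespace or Format" status). A condensed sketch of the loop, assuming the subsystem and bdevs created above:

size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do           # run for as long as perf is alive
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    size=$((size + 1))                              # null_size=1001, 1002, ... in the trace
    ./scripts/rpc.py bdev_null_resize NULL1 "$size"
done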
00:12:33.103 06:49:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:33.361 06:49:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:12:33.361 06:49:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:33.619 true 00:12:33.619 06:49:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:33.619 06:49:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.991 06:49:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:34.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:34.991 06:49:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:12:34.991 06:49:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:35.249 true 00:12:35.249 06:49:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:35.249 06:49:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.506 06:49:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:35.764 06:49:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:12:35.764 06:49:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:36.022 true 00:12:36.022 06:49:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:36.022 06:49:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.956 06:49:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:36.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:36.956 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:37.214 06:49:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:12:37.214 06:49:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:37.472 true 00:12:37.472 06:49:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:37.472 06:49:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.730 06:49:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:37.988 06:49:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:12:37.988 06:49:52 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:38.245 true 00:12:38.245 06:49:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:38.245 06:49:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.177 06:49:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:39.434 06:49:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:12:39.434 06:49:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:39.691 true 00:12:39.691 06:49:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:39.691 06:49:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.949 06:49:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:40.207 06:49:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:12:40.207 06:49:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:40.503 true 00:12:40.503 06:49:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:40.503 06:49:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.435 06:49:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:41.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:41.693 06:49:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:12:41.693 06:49:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:41.693 true 00:12:41.693 06:49:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:41.693 06:49:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.950 06:49:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:42.207 06:49:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:12:42.207 06:49:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:42.465 true 00:12:42.465 06:49:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:42.465 06:49:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.396 06:49:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.653 06:49:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:12:43.653 06:49:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:43.910 true 00:12:43.910 06:49:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:43.910 06:49:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:44.168 06:49:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:44.425 06:49:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:12:44.425 06:49:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:44.682 true 00:12:44.682 06:49:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:44.682 06:49:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.615 06:49:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:45.873 06:49:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:12:45.873 06:49:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:46.129 true 00:12:46.129 06:50:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:46.129 06:50:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:46.386 06:50:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:46.644 06:50:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:12:46.644 06:50:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:46.644 true 00:12:46.901 06:50:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:46.901 06:50:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.834 06:50:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.834 06:50:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:12:47.834 06:50:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:48.092 true 00:12:48.092 06:50:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:48.092 06:50:02 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.349 06:50:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.606 06:50:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:12:48.606 06:50:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:48.863 true 00:12:48.863 06:50:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:48.863 06:50:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.794 06:50:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.051 06:50:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:12:50.051 06:50:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:50.308 true 00:12:50.308 06:50:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:50.308 06:50:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.565 06:50:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.822 06:50:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:12:50.822 06:50:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:51.080 true 00:12:51.080 06:50:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:51.080 06:50:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.012 06:50:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.270 06:50:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:12:52.270 06:50:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:52.270 true 00:12:52.270 06:50:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:52.270 06:50:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.528 06:50:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.786 06:50:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:12:52.786 06:50:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:53.043 true 00:12:53.043 06:50:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:53.043 06:50:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
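The loop keeps cycling (null_size has reached 1018 at this point). A hypothetical helper, not part of this test, for watching the namespace flap from a third shell while the loop runs, using the target's regular RPC interface:

# Print the live namespace count once per second.
while sleep 1; do
    ./scripts/rpc.py nvmf_get_subsystems |
        python3 -c 'import json,sys; print(sum(len(s.get("namespaces", [])) for s in json.load(sys.stdin)))'
done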
00:12:53.975 06:50:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:54.234 06:50:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:12:54.234 06:50:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:54.491 true 00:12:54.491 06:50:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:54.491 06:50:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:54.749 06:50:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.006 06:50:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:12:55.006 06:50:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:55.274 true 00:12:55.274 06:50:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:55.274 06:50:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.232 06:50:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.232 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.490 06:50:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:12:56.490 06:50:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:56.748 true 00:12:56.748 06:50:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:56.748 06:50:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.006 06:50:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.263 06:50:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:12:57.263 06:50:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:57.263 true 00:12:57.520 06:50:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:57.520 06:50:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.452 06:50:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.709 06:50:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:12:58.709 06:50:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:58.709 true 00:12:58.966 06:50:12 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:58.966 06:50:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.966 06:50:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.222 06:50:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:12:59.222 06:50:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:59.479 true 00:12:59.479 06:50:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:12:59.479 06:50:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.411 06:50:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.411 Initializing NVMe Controllers 00:13:00.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:00.411 Controller IO queue size 128, less than required. 00:13:00.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:00.411 Controller IO queue size 128, less than required. 00:13:00.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:00.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:00.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:00.411 Initialization complete. Launching workers. 
00:13:00.411 ======================================================== 00:13:00.411 Latency(us) 00:13:00.411 Device Information : IOPS MiB/s Average min max 00:13:00.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 841.46 0.41 84605.53 1752.77 1050409.89 00:13:00.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12125.98 5.92 10556.06 2442.85 368783.40 00:13:00.411 ======================================================== 00:13:00.411 Total : 12967.45 6.33 15361.15 1752.77 1050409.89 00:13:00.411 00:13:00.667 06:50:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:13:00.668 06:50:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:00.924 true 00:13:00.924 06:50:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 454632 00:13:00.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (454632) - No such process 00:13:00.924 06:50:14 -- target/ns_hotplug_stress.sh@44 -- # wait 454632 00:13:00.924 06:50:14 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:00.924 06:50:14 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:13:00.924 06:50:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:00.924 06:50:14 -- nvmf/common.sh@116 -- # sync 00:13:00.924 06:50:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:00.924 06:50:15 -- nvmf/common.sh@119 -- # set +e 00:13:00.924 06:50:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:00.924 06:50:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:00.924 rmmod nvme_tcp 00:13:00.924 rmmod nvme_fabrics 00:13:00.924 rmmod nvme_keyring 00:13:00.924 06:50:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:00.924 06:50:15 -- nvmf/common.sh@123 -- # set -e 00:13:00.924 06:50:15 -- nvmf/common.sh@124 -- # return 0 00:13:00.924 06:50:15 -- nvmf/common.sh@477 -- # '[' -n 454198 ']' 00:13:00.925 06:50:15 -- nvmf/common.sh@478 -- # killprocess 454198 00:13:00.925 06:50:15 -- common/autotest_common.sh@926 -- # '[' -z 454198 ']' 00:13:00.925 06:50:15 -- common/autotest_common.sh@930 -- # kill -0 454198 00:13:00.925 06:50:15 -- common/autotest_common.sh@931 -- # uname 00:13:00.925 06:50:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:00.925 06:50:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 454198 00:13:00.925 06:50:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:00.925 06:50:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:00.925 06:50:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 454198' 00:13:00.925 killing process with pid 454198 00:13:00.925 06:50:15 -- common/autotest_common.sh@945 -- # kill 454198 00:13:00.925 06:50:15 -- common/autotest_common.sh@950 -- # wait 454198 00:13:01.183 06:50:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:01.183 06:50:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:01.183 06:50:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:01.183 06:50:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.183 06:50:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:01.183 06:50:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.183 06:50:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.183 06:50:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.717 06:50:17 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:03.717 00:13:03.717 real 0m39.990s 00:13:03.717 user 2m32.507s 00:13:03.717 sys 0m10.622s 00:13:03.717 06:50:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.717 06:50:17 -- common/autotest_common.sh@10 -- # set +x 00:13:03.717 ************************************ 00:13:03.717 END TEST nvmf_ns_hotplug_stress 00:13:03.717 ************************************ 00:13:03.717 06:50:17 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:03.717 06:50:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:03.717 06:50:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:03.717 06:50:17 -- common/autotest_common.sh@10 -- # set +x 00:13:03.717 ************************************ 00:13:03.717 START TEST nvmf_connect_stress 00:13:03.717 ************************************ 00:13:03.717 06:50:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:03.717 * Looking for test storage... 00:13:03.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.717 06:50:17 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.717 06:50:17 -- nvmf/common.sh@7 -- # uname -s 00:13:03.717 06:50:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.717 06:50:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.717 06:50:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.717 06:50:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.717 06:50:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.717 06:50:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.717 06:50:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.717 06:50:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.717 06:50:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.717 06:50:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.717 06:50:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.717 06:50:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.717 06:50:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.717 06:50:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.717 06:50:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.717 06:50:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.717 06:50:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.717 06:50:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.717 06:50:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.717 06:50:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.717 06:50:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.717 06:50:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.717 06:50:17 -- paths/export.sh@5 -- # export PATH 00:13:03.717 06:50:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.717 06:50:17 -- nvmf/common.sh@46 -- # : 0 00:13:03.717 06:50:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:03.717 06:50:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:03.717 06:50:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:03.717 06:50:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.717 06:50:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.717 06:50:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:03.717 06:50:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:03.717 06:50:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:03.718 06:50:17 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:03.718 06:50:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:03.718 06:50:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.718 06:50:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:03.718 06:50:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:03.718 06:50:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:03.718 06:50:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.718 06:50:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.718 06:50:17 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.718 06:50:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:03.718 06:50:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:03.718 06:50:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:03.718 06:50:17 -- common/autotest_common.sh@10 -- # set +x 00:13:06.249 06:50:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:06.249 06:50:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:06.249 06:50:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:06.249 06:50:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:06.249 06:50:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:06.249 06:50:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:06.249 06:50:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:06.249 06:50:19 -- nvmf/common.sh@294 -- # net_devs=() 00:13:06.249 06:50:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:06.249 06:50:19 -- nvmf/common.sh@295 -- # e810=() 00:13:06.249 06:50:19 -- nvmf/common.sh@295 -- # local -ga e810 00:13:06.249 06:50:19 -- nvmf/common.sh@296 -- # x722=() 00:13:06.249 06:50:19 -- nvmf/common.sh@296 -- # local -ga x722 00:13:06.249 06:50:19 -- nvmf/common.sh@297 -- # mlx=() 00:13:06.249 06:50:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:06.249 06:50:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.249 06:50:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:06.250 06:50:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:06.250 06:50:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:06.250 06:50:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:06.250 06:50:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:06.250 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:06.250 06:50:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:06.250 06:50:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:06.250 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:06.250 
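This second pass of gather_supported_nvmf_pci_devs (connect_stress re-sources nvmf/common.sh) rediscovers the same two E810 functions, vendor 0x8086 and device 0x159b, and the lines that follow map each function to its kernel netdev through /sys/bus/pci/devices/$pci/net. A minimal equivalent of that probe, assuming lspci is available:

# Enumerate E810 (8086:159b) functions and the netdev bound to each one.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net device under $pci: ${netdev##*/}"
    done
done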
06:50:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:06.250 06:50:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:06.250 06:50:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.250 06:50:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:06.250 06:50:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.250 06:50:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:06.250 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:06.250 06:50:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.250 06:50:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:06.250 06:50:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.250 06:50:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:06.250 06:50:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.250 06:50:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:06.250 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:06.250 06:50:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.250 06:50:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:06.250 06:50:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:06.250 06:50:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:06.250 06:50:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:06.250 06:50:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.250 06:50:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.250 06:50:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.250 06:50:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:06.250 06:50:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.250 06:50:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.250 06:50:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:06.250 06:50:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.250 06:50:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.250 06:50:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:06.250 06:50:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:06.250 06:50:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.250 06:50:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.250 06:50:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.250 06:50:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.250 06:50:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:06.250 06:50:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.250 06:50:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.250 06:50:20 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.250 06:50:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:06.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:13:06.250 00:13:06.250 --- 10.0.0.2 ping statistics --- 00:13:06.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.250 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:06.250 06:50:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:13:06.250 00:13:06.250 --- 10.0.0.1 ping statistics --- 00:13:06.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.250 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:06.250 06:50:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.250 06:50:20 -- nvmf/common.sh@410 -- # return 0 00:13:06.250 06:50:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:06.250 06:50:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.250 06:50:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:06.250 06:50:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:06.250 06:50:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.250 06:50:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:06.250 06:50:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:06.250 06:50:20 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:06.250 06:50:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:06.250 06:50:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:06.250 06:50:20 -- common/autotest_common.sh@10 -- # set +x 00:13:06.250 06:50:20 -- nvmf/common.sh@469 -- # nvmfpid=460760 00:13:06.250 06:50:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:06.250 06:50:20 -- nvmf/common.sh@470 -- # waitforlisten 460760 00:13:06.250 06:50:20 -- common/autotest_common.sh@819 -- # '[' -z 460760 ']' 00:13:06.250 06:50:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.250 06:50:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:06.250 06:50:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.250 06:50:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:06.250 06:50:20 -- common/autotest_common.sh@10 -- # set +x 00:13:06.250 [2024-05-15 06:50:20.172094] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
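The namespace plumbing above (nvmf_tcp_init) is what lets target and initiator share one host: the target port cvl_0_0 is moved into a private network namespace, while the initiator port cvl_0_1 stays in the default one. A minimal standalone sketch of the same topology, with the interface names and addresses taken from this log (any two ports wired back-to-back would do; the log also flushes stale addresses with `ip -4 addr flush` first):

    # target port lives in its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listening port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every NVMF_TARGET_NS_CMD invocation later in the log is just this `ip netns exec cvl_0_0_ns_spdk` prefix wrapped around the target-side command, which is how nvmf_tgt ends up listening on 10.0.0.2 inside the namespace.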
00:13:06.250 [2024-05-15 06:50:20.172167] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.250 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.250 [2024-05-15 06:50:20.256487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.250 [2024-05-15 06:50:20.376029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:06.250 [2024-05-15 06:50:20.376220] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.250 [2024-05-15 06:50:20.376237] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.250 [2024-05-15 06:50:20.376266] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.250 [2024-05-15 06:50:20.376384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.250 [2024-05-15 06:50:20.376541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.250 [2024-05-15 06:50:20.376544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.183 06:50:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:07.183 06:50:21 -- common/autotest_common.sh@852 -- # return 0 00:13:07.183 06:50:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:07.183 06:50:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:07.183 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.183 06:50:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.183 06:50:21 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:07.183 06:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.183 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.183 [2024-05-15 06:50:21.147674] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.183 06:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.183 06:50:21 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:07.183 06:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.183 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.183 06:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.183 06:50:21 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.183 06:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.183 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.183 [2024-05-15 06:50:21.178081] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.183 06:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.183 06:50:21 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:07.183 06:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.183 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.183 NULL1 00:13:07.183 06:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.183 06:50:21 -- target/connect_stress.sh@21 -- # PERF_PID=460919 00:13:07.183 06:50:21 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.183 06:50:21 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:07.183 06:50:21 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:07.183 06:50:21 -- target/connect_stress.sh@28 -- # cat 00:13:07.183 06:50:21 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:07.183 06:50:21 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:13:07.183 06:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.183 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 06:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.440 06:50:21 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:07.440 06:50:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.440 06:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.440 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:07.698 06:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.698 06:50:21 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:07.698 06:50:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:07.698 06:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.698 06:50:21 -- common/autotest_common.sh@10 -- # set +x 00:13:08.263 06:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.263 06:50:22 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:08.263 06:50:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.263 06:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.263 06:50:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.521 06:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.521 06:50:22 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:08.521 06:50:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.521 06:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.521 06:50:22 -- common/autotest_common.sh@10 -- # set +x 00:13:08.779 06:50:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.779 06:50:22 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:08.779 06:50:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:08.779 06:50:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.779 06:50:22 -- common/autotest_common.sh@10 -- # set +x 00:13:09.037 06:50:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.037 06:50:23 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:09.037 06:50:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.037 06:50:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.037 06:50:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.293 06:50:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.293 06:50:23 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:09.293 06:50:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.293 06:50:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.293 06:50:23 -- common/autotest_common.sh@10 -- # set +x 00:13:09.858 06:50:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:09.858 06:50:23 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:09.858 06:50:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:09.858 06:50:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:09.858 06:50:23 -- common/autotest_common.sh@10 -- # set +x 00:13:10.115 06:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.115 06:50:24 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:10.115 06:50:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.115 06:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.115 06:50:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.372 06:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.372 06:50:24 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:10.372 06:50:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.372 
06:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.372 06:50:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.630 06:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.630 06:50:24 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:10.630 06:50:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.630 06:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.630 06:50:24 -- common/autotest_common.sh@10 -- # set +x 00:13:10.888 06:50:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.888 06:50:25 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:10.888 06:50:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:10.888 06:50:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.888 06:50:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.454 06:50:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.454 06:50:25 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:11.454 06:50:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.454 06:50:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.454 06:50:25 -- common/autotest_common.sh@10 -- # set +x 00:13:11.712 06:50:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.712 06:50:25 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:11.712 06:50:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:11.712 06:50:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.712 06:50:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.008 06:50:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.008 06:50:26 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:12.008 06:50:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.008 06:50:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.008 06:50:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.266 06:50:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.266 06:50:26 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:12.266 06:50:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.266 06:50:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.266 06:50:26 -- common/autotest_common.sh@10 -- # set +x 00:13:12.524 06:50:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.524 06:50:26 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:12.524 06:50:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:12.524 06:50:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.524 06:50:26 -- common/autotest_common.sh@10 -- # set +x 00:13:13.089 06:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.089 06:50:27 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:13.089 06:50:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.089 06:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.089 06:50:27 -- common/autotest_common.sh@10 -- # set +x 00:13:13.347 06:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.347 06:50:27 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:13.347 06:50:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.347 06:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.347 06:50:27 -- common/autotest_common.sh@10 -- # set +x 00:13:13.606 06:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.606 06:50:27 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:13.606 06:50:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.606 06:50:27 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.606 06:50:27 -- common/autotest_common.sh@10 -- # set +x 00:13:13.864 06:50:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.864 06:50:27 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:13.864 06:50:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:13.864 06:50:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.864 06:50:27 -- common/autotest_common.sh@10 -- # set +x 00:13:14.121 06:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.121 06:50:28 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:14.121 06:50:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.121 06:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.121 06:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.686 06:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.686 06:50:28 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:14.686 06:50:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.686 06:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.686 06:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:14.943 06:50:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.943 06:50:28 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:14.943 06:50:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:14.943 06:50:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.943 06:50:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.200 06:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.200 06:50:29 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:15.200 06:50:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.200 06:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.200 06:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.457 06:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.457 06:50:29 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:15.457 06:50:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.457 06:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.457 06:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:15.714 06:50:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.714 06:50:29 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:15.714 06:50:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:15.714 06:50:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.714 06:50:29 -- common/autotest_common.sh@10 -- # set +x 00:13:16.280 06:50:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.280 06:50:30 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:16.280 06:50:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.280 06:50:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.280 06:50:30 -- common/autotest_common.sh@10 -- # set +x 00:13:16.537 06:50:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.537 06:50:30 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:16.537 06:50:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.537 06:50:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.537 06:50:30 -- common/autotest_common.sh@10 -- # set +x 00:13:16.795 06:50:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.795 06:50:30 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:16.795 06:50:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:16.795 06:50:30 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.795 06:50:30 -- common/autotest_common.sh@10 -- # set +x 00:13:17.052 06:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.052 06:50:31 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:17.052 06:50:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.052 06:50:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.052 06:50:31 -- common/autotest_common.sh@10 -- # set +x 00:13:17.310 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.310 06:50:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.310 06:50:31 -- target/connect_stress.sh@34 -- # kill -0 460919 00:13:17.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (460919) - No such process 00:13:17.310 06:50:31 -- target/connect_stress.sh@38 -- # wait 460919 00:13:17.310 06:50:31 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.310 06:50:31 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:17.310 06:50:31 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:17.310 06:50:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:17.310 06:50:31 -- nvmf/common.sh@116 -- # sync 00:13:17.310 06:50:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:17.310 06:50:31 -- nvmf/common.sh@119 -- # set +e 00:13:17.310 06:50:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:17.310 06:50:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:17.310 rmmod nvme_tcp 00:13:17.568 rmmod nvme_fabrics 00:13:17.568 rmmod nvme_keyring 00:13:17.568 06:50:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:17.568 06:50:31 -- nvmf/common.sh@123 -- # set -e 00:13:17.568 06:50:31 -- nvmf/common.sh@124 -- # return 0 00:13:17.568 06:50:31 -- nvmf/common.sh@477 -- # '[' -n 460760 ']' 00:13:17.568 06:50:31 -- nvmf/common.sh@478 -- # killprocess 460760 00:13:17.568 06:50:31 -- common/autotest_common.sh@926 -- # '[' -z 460760 ']' 00:13:17.568 06:50:31 -- common/autotest_common.sh@930 -- # kill -0 460760 00:13:17.568 06:50:31 -- common/autotest_common.sh@931 -- # uname 00:13:17.568 06:50:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:17.568 06:50:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 460760 00:13:17.568 06:50:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:17.568 06:50:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:17.568 06:50:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 460760' 00:13:17.568 killing process with pid 460760 00:13:17.568 06:50:31 -- common/autotest_common.sh@945 -- # kill 460760 00:13:17.568 06:50:31 -- common/autotest_common.sh@950 -- # wait 460760 00:13:17.828 06:50:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:17.828 06:50:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:17.828 06:50:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:17.828 06:50:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.828 06:50:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:17.828 06:50:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.828 06:50:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.828 06:50:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.736 06:50:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
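The long run of `kill -0 460919` probes above is the stress loop of connect_stress.sh: signal 0 delivers nothing and only fails once the PID is gone, so the script keeps firing batches of RPCs at the target for as long as the 10-second connect_stress run (PID 460919) stays alive, and the loop ends with the expected "No such process". A sketch of the pattern, assuming the harness's rpc_cmd helper consumes the 20 queued commands in rpc.txt on stdin (the xtrace does not show the redirection):

    connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < "$rpcs"   # keep the control plane busy while connects churn
    done
    wait "$PERF_PID"        # reap it; the harness then removes rpc.txt

The teardown that follows (`modprobe -v -r nvme-tcp` and friends) runs under `set +e` inside a `for i in {1..20}` retry loop, presumably because the kernel modules stay busy until the last queue pair drains.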
00:13:19.736 00:13:19.736 real 0m16.504s 00:13:19.736 user 0m40.184s 00:13:19.736 sys 0m6.515s 00:13:19.736 06:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.736 06:50:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.736 ************************************ 00:13:19.736 END TEST nvmf_connect_stress 00:13:19.736 ************************************ 00:13:19.736 06:50:33 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:19.736 06:50:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:19.736 06:50:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.736 06:50:33 -- common/autotest_common.sh@10 -- # set +x 00:13:19.736 ************************************ 00:13:19.736 START TEST nvmf_fused_ordering 00:13:19.736 ************************************ 00:13:19.736 06:50:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:19.996 * Looking for test storage... 00:13:19.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:19.996 06:50:33 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.996 06:50:33 -- nvmf/common.sh@7 -- # uname -s 00:13:19.996 06:50:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.996 06:50:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.996 06:50:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.996 06:50:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.996 06:50:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.996 06:50:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.996 06:50:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.996 06:50:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.996 06:50:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.996 06:50:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.996 06:50:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.996 06:50:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:19.996 06:50:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.996 06:50:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.996 06:50:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.996 06:50:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:19.996 06:50:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.996 06:50:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.996 06:50:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.996 06:50:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
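The fused_ordering preamble above repeats the pattern of the previous test: source nvmf/common.sh, mint a host identity, then rediscover the NICs and rebuild the namespace. The host identity comes from nvme-cli itself; `nvme gen-hostnqn` prints a fresh UUID-based NQN, which the harness stores in NVME_HOSTNQN/NVME_HOSTID and passes to every `nvme connect`. A sketch of that step (the HOSTID derivation here is one plausible way to peel the UUID off the NQN, not necessarily how common.sh does it):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare UUID, as seen in the log
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")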
00:13:19.996 06:50:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.996 06:50:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.996 06:50:34 -- paths/export.sh@5 -- # export PATH 00:13:19.996 06:50:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.996 06:50:34 -- nvmf/common.sh@46 -- # : 0 00:13:19.996 06:50:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:19.996 06:50:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:19.996 06:50:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:19.996 06:50:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.996 06:50:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.996 06:50:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:19.996 06:50:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:19.996 06:50:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:19.996 06:50:34 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:19.996 06:50:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:19.996 06:50:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.996 06:50:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:19.996 06:50:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:19.996 06:50:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:19.996 06:50:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.996 06:50:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.996 06:50:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.996 06:50:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:19.996 06:50:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:19.996 06:50:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:19.996 06:50:34 -- common/autotest_common.sh@10 -- # set +x 00:13:22.529 06:50:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:22.529 06:50:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:22.529 06:50:36 -- 
nvmf/common.sh@290 -- # local -a pci_devs 00:13:22.529 06:50:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:22.529 06:50:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:22.529 06:50:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:22.529 06:50:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:22.529 06:50:36 -- nvmf/common.sh@294 -- # net_devs=() 00:13:22.529 06:50:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:22.529 06:50:36 -- nvmf/common.sh@295 -- # e810=() 00:13:22.529 06:50:36 -- nvmf/common.sh@295 -- # local -ga e810 00:13:22.529 06:50:36 -- nvmf/common.sh@296 -- # x722=() 00:13:22.529 06:50:36 -- nvmf/common.sh@296 -- # local -ga x722 00:13:22.529 06:50:36 -- nvmf/common.sh@297 -- # mlx=() 00:13:22.529 06:50:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:22.529 06:50:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.529 06:50:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:22.529 06:50:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:22.529 06:50:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:22.529 06:50:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.529 06:50:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:22.529 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:22.529 06:50:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.529 06:50:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:22.529 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:22.529 06:50:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:22.529 06:50:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:22.529 06:50:36 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.529 06:50:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.529 06:50:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.529 06:50:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.529 06:50:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:22.529 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:22.529 06:50:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.529 06:50:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.529 06:50:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.529 06:50:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.529 06:50:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.529 06:50:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:22.529 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:22.529 06:50:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.529 06:50:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:22.529 06:50:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:22.529 06:50:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:22.529 06:50:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:22.529 06:50:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.529 06:50:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.529 06:50:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.529 06:50:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:22.529 06:50:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.529 06:50:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.529 06:50:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:22.529 06:50:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.529 06:50:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.529 06:50:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:22.529 06:50:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:22.529 06:50:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.529 06:50:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.529 06:50:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.529 06:50:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.529 06:50:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:22.529 06:50:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.529 06:50:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.529 06:50:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.529 06:50:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:22.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:22.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:13:22.529 00:13:22.529 --- 10.0.0.2 ping statistics --- 00:13:22.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.529 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:13:22.529 06:50:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:13:22.530 00:13:22.530 --- 10.0.0.1 ping statistics --- 00:13:22.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.530 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:22.530 06:50:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.530 06:50:36 -- nvmf/common.sh@410 -- # return 0 00:13:22.530 06:50:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:22.530 06:50:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.530 06:50:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:22.530 06:50:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:22.530 06:50:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.530 06:50:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:22.530 06:50:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:22.530 06:50:36 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:22.530 06:50:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:22.530 06:50:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:22.530 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:22.530 06:50:36 -- nvmf/common.sh@469 -- # nvmfpid=464538 00:13:22.530 06:50:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:22.530 06:50:36 -- nvmf/common.sh@470 -- # waitforlisten 464538 00:13:22.530 06:50:36 -- common/autotest_common.sh@819 -- # '[' -z 464538 ']' 00:13:22.530 06:50:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.530 06:50:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:22.530 06:50:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.530 06:50:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:22.530 06:50:36 -- common/autotest_common.sh@10 -- # set +x 00:13:22.530 [2024-05-15 06:50:36.605833] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:22.530 [2024-05-15 06:50:36.605902] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.530 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.530 [2024-05-15 06:50:36.698442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.788 [2024-05-15 06:50:36.830203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.788 [2024-05-15 06:50:36.830379] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.788 [2024-05-15 06:50:36.830408] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:22.788 [2024-05-15 06:50:36.830433] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.788 [2024-05-15 06:50:36.830473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.722 06:50:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:23.722 06:50:37 -- common/autotest_common.sh@852 -- # return 0 00:13:23.722 06:50:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:23.722 06:50:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:23.722 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 06:50:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.722 06:50:37 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.722 06:50:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.722 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 [2024-05-15 06:50:37.720323] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.722 06:50:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.722 06:50:37 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:23.722 06:50:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.722 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 06:50:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.722 06:50:37 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.722 06:50:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.722 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 [2024-05-15 06:50:37.736506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.722 06:50:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.722 06:50:37 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:23.722 06:50:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.722 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 NULL1 00:13:23.722 06:50:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.722 06:50:37 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:23.722 06:50:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.722 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 06:50:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.722 06:50:37 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:23.722 06:50:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.722 06:50:37 -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 06:50:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.722 06:50:37 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:23.722 [2024-05-15 06:50:37.782114] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:13:23.722 [2024-05-15 06:50:37.782154] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464695 ] 00:13:23.722 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.655 Attached to nqn.2016-06.io.spdk:cnode1 00:13:24.655 Namespace ID: 1 size: 1GB 00:13:24.655 fused_ordering(0) 00:13:24.655 fused_ordering(1) 00:13:24.655 fused_ordering(2) 00:13:24.655 fused_ordering(3) 00:13:24.655 fused_ordering(4) 00:13:24.655 fused_ordering(5) 00:13:24.655 fused_ordering(6) 00:13:24.655 fused_ordering(7) 00:13:24.655 fused_ordering(8) 00:13:24.655 fused_ordering(9) 00:13:24.655 fused_ordering(10) 00:13:24.655 fused_ordering(11) 00:13:24.655 fused_ordering(12) 00:13:24.655 fused_ordering(13) 00:13:24.655 fused_ordering(14) 00:13:24.655 fused_ordering(15) 00:13:24.655 fused_ordering(16) 00:13:24.655 fused_ordering(17) 00:13:24.655 fused_ordering(18) 00:13:24.655 fused_ordering(19) 00:13:24.655 fused_ordering(20) 00:13:24.655 fused_ordering(21) 00:13:24.655 fused_ordering(22) 00:13:24.655 fused_ordering(23) 00:13:24.655 fused_ordering(24) 00:13:24.655 fused_ordering(25) 00:13:24.655 fused_ordering(26) 00:13:24.655 fused_ordering(27) 00:13:24.655 fused_ordering(28) 00:13:24.655 fused_ordering(29) 00:13:24.655 fused_ordering(30) 00:13:24.655 fused_ordering(31) 00:13:24.655 fused_ordering(32) 00:13:24.655 fused_ordering(33) 00:13:24.655 fused_ordering(34) 00:13:24.655 fused_ordering(35) 00:13:24.655 fused_ordering(36) 00:13:24.655 fused_ordering(37) 00:13:24.655 fused_ordering(38) 00:13:24.655 fused_ordering(39) 00:13:24.655 fused_ordering(40) 00:13:24.655 fused_ordering(41) 00:13:24.655 fused_ordering(42) 00:13:24.655 fused_ordering(43) 00:13:24.655 fused_ordering(44) 00:13:24.655 fused_ordering(45) 00:13:24.655 fused_ordering(46) 00:13:24.655 fused_ordering(47) 00:13:24.655 fused_ordering(48) 00:13:24.655 fused_ordering(49) 00:13:24.655 fused_ordering(50) 00:13:24.655 fused_ordering(51) 00:13:24.655 fused_ordering(52) 00:13:24.655 fused_ordering(53) 00:13:24.655 fused_ordering(54) 00:13:24.655 fused_ordering(55) 00:13:24.655 fused_ordering(56) 00:13:24.655 fused_ordering(57) 00:13:24.655 fused_ordering(58) 00:13:24.655 fused_ordering(59) 00:13:24.655 fused_ordering(60) 00:13:24.655 fused_ordering(61) 00:13:24.655 fused_ordering(62) 00:13:24.655 fused_ordering(63) 00:13:24.655 fused_ordering(64) 00:13:24.655 fused_ordering(65) 00:13:24.655 fused_ordering(66) 00:13:24.655 fused_ordering(67) 00:13:24.655 fused_ordering(68) 00:13:24.655 fused_ordering(69) 00:13:24.655 fused_ordering(70) 00:13:24.655 fused_ordering(71) 00:13:24.655 fused_ordering(72) 00:13:24.655 fused_ordering(73) 00:13:24.655 fused_ordering(74) 00:13:24.655 fused_ordering(75) 00:13:24.655 fused_ordering(76) 00:13:24.655 fused_ordering(77) 00:13:24.655 fused_ordering(78) 00:13:24.655 fused_ordering(79) 00:13:24.655 fused_ordering(80) 00:13:24.655 fused_ordering(81) 00:13:24.655 fused_ordering(82) 00:13:24.655 fused_ordering(83) 00:13:24.655 fused_ordering(84) 00:13:24.655 fused_ordering(85) 00:13:24.655 fused_ordering(86) 00:13:24.655 fused_ordering(87) 00:13:24.655 fused_ordering(88) 00:13:24.655 fused_ordering(89) 00:13:24.655 fused_ordering(90) 00:13:24.655 fused_ordering(91) 00:13:24.655 fused_ordering(92) 00:13:24.655 fused_ordering(93) 00:13:24.655 fused_ordering(94) 00:13:24.655 fused_ordering(95) 00:13:24.655 fused_ordering(96) 00:13:24.655 
fused_ordering(97) 00:13:24.655 fused_ordering(98) 00:13:24.655 fused_ordering(99) 00:13:24.655 fused_ordering(100) 00:13:24.655 fused_ordering(101) 00:13:24.655 fused_ordering(102) 00:13:24.655 fused_ordering(103) 00:13:24.655 fused_ordering(104) 00:13:24.655 fused_ordering(105) 00:13:24.655 fused_ordering(106) 00:13:24.655 fused_ordering(107) 00:13:24.655 fused_ordering(108) 00:13:24.655 fused_ordering(109) 00:13:24.655 fused_ordering(110) 00:13:24.655 fused_ordering(111) 00:13:24.655 fused_ordering(112) 00:13:24.655 fused_ordering(113) 00:13:24.655 fused_ordering(114) 00:13:24.655 fused_ordering(115) 00:13:24.655 fused_ordering(116) 00:13:24.655 fused_ordering(117) 00:13:24.655 fused_ordering(118) 00:13:24.655 fused_ordering(119) 00:13:24.655 fused_ordering(120) 00:13:24.655 fused_ordering(121) 00:13:24.655 fused_ordering(122) 00:13:24.655 fused_ordering(123) 00:13:24.655 fused_ordering(124) 00:13:24.655 fused_ordering(125) 00:13:24.655 fused_ordering(126) 00:13:24.655 fused_ordering(127) 00:13:24.655 fused_ordering(128) 00:13:24.655 fused_ordering(129) 00:13:24.655 fused_ordering(130) 00:13:24.655 fused_ordering(131) 00:13:24.655 fused_ordering(132) 00:13:24.655 fused_ordering(133) 00:13:24.655 fused_ordering(134) 00:13:24.655 fused_ordering(135) 00:13:24.655 fused_ordering(136) 00:13:24.655 fused_ordering(137) 00:13:24.655 fused_ordering(138) 00:13:24.655 fused_ordering(139) 00:13:24.655 fused_ordering(140) 00:13:24.655 fused_ordering(141) 00:13:24.655 fused_ordering(142) 00:13:24.655 fused_ordering(143) 00:13:24.655 fused_ordering(144) 00:13:24.655 fused_ordering(145) 00:13:24.655 fused_ordering(146) 00:13:24.655 fused_ordering(147) 00:13:24.655 fused_ordering(148) 00:13:24.655 fused_ordering(149) 00:13:24.655 fused_ordering(150) 00:13:24.655 fused_ordering(151) 00:13:24.655 fused_ordering(152) 00:13:24.655 fused_ordering(153) 00:13:24.655 fused_ordering(154) 00:13:24.655 fused_ordering(155) 00:13:24.655 fused_ordering(156) 00:13:24.655 fused_ordering(157) 00:13:24.655 fused_ordering(158) 00:13:24.655 fused_ordering(159) 00:13:24.655 fused_ordering(160) 00:13:24.655 fused_ordering(161) 00:13:24.655 fused_ordering(162) 00:13:24.655 fused_ordering(163) 00:13:24.655 fused_ordering(164) 00:13:24.655 fused_ordering(165) 00:13:24.655 fused_ordering(166) 00:13:24.655 fused_ordering(167) 00:13:24.655 fused_ordering(168) 00:13:24.655 fused_ordering(169) 00:13:24.655 fused_ordering(170) 00:13:24.655 fused_ordering(171) 00:13:24.655 fused_ordering(172) 00:13:24.655 fused_ordering(173) 00:13:24.655 fused_ordering(174) 00:13:24.655 fused_ordering(175) 00:13:24.655 fused_ordering(176) 00:13:24.655 fused_ordering(177) 00:13:24.655 fused_ordering(178) 00:13:24.655 fused_ordering(179) 00:13:24.655 fused_ordering(180) 00:13:24.655 fused_ordering(181) 00:13:24.655 fused_ordering(182) 00:13:24.655 fused_ordering(183) 00:13:24.655 fused_ordering(184) 00:13:24.655 fused_ordering(185) 00:13:24.655 fused_ordering(186) 00:13:24.655 fused_ordering(187) 00:13:24.655 fused_ordering(188) 00:13:24.655 fused_ordering(189) 00:13:24.655 fused_ordering(190) 00:13:24.655 fused_ordering(191) 00:13:24.655 fused_ordering(192) 00:13:24.655 fused_ordering(193) 00:13:24.655 fused_ordering(194) 00:13:24.655 fused_ordering(195) 00:13:24.655 fused_ordering(196) 00:13:24.655 fused_ordering(197) 00:13:24.655 fused_ordering(198) 00:13:24.655 fused_ordering(199) 00:13:24.655 fused_ordering(200) 00:13:24.655 fused_ordering(201) 00:13:24.655 fused_ordering(202) 00:13:24.655 fused_ordering(203) 00:13:24.655 fused_ordering(204) 
00:13:24.655 fused_ordering(205)
00:13:25.221 fused_ordering(206)
[Repetitive fused_ordering enumeration elided: entries 207 through 1022, timestamps 00:13:25.221 through 00:13:28.535, with no interleaved errors.]
00:13:28.535 fused_ordering(1023)
00:13:28.535 06:50:42 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:13:28.535 06:50:42 -- target/fused_ordering.sh@25 -- # nvmftestfini
00:13:28.535 06:50:42 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:28.535 06:50:42 -- nvmf/common.sh@116 -- # sync
00:13:28.535 06:50:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:28.535 06:50:42 -- nvmf/common.sh@119 -- # set +e
00:13:28.535 06:50:42 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:28.535 06:50:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:28.535 rmmod nvme_tcp
00:13:28.535 rmmod nvme_fabrics
00:13:28.535 rmmod nvme_keyring
00:13:28.535 06:50:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:28.535 06:50:42 -- nvmf/common.sh@123 -- # set -e
00:13:28.535 06:50:42 -- nvmf/common.sh@124 -- # return 0
00:13:28.535 06:50:42 -- nvmf/common.sh@477 -- # '[' -n 464538 ']'
00:13:28.535 06:50:42 -- nvmf/common.sh@478 -- # killprocess 464538
00:13:28.535 06:50:42 -- common/autotest_common.sh@926 -- # '[' -z 464538 ']'
00:13:28.535 06:50:42 -- common/autotest_common.sh@930 -- # kill -0 464538
00:13:28.535 06:50:42 -- common/autotest_common.sh@931 -- # uname
00:13:28.535 06:50:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:28.535 06:50:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o
comm= 464538 00:13:28.535 06:50:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:28.535 06:50:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:28.535 06:50:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 464538' 00:13:28.535 killing process with pid 464538 00:13:28.535 06:50:42 -- common/autotest_common.sh@945 -- # kill 464538 00:13:28.535 06:50:42 -- common/autotest_common.sh@950 -- # wait 464538 00:13:28.535 06:50:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:28.535 06:50:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:28.535 06:50:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:28.535 06:50:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.535 06:50:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:28.535 06:50:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.535 06:50:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.535 06:50:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.069 06:50:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:31.069 00:13:31.069 real 0m10.779s 00:13:31.069 user 0m8.228s 00:13:31.069 sys 0m5.652s 00:13:31.069 06:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.069 06:50:44 -- common/autotest_common.sh@10 -- # set +x 00:13:31.069 ************************************ 00:13:31.069 END TEST nvmf_fused_ordering 00:13:31.069 ************************************ 00:13:31.069 06:50:44 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:31.069 06:50:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:31.069 06:50:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.069 06:50:44 -- common/autotest_common.sh@10 -- # set +x 00:13:31.069 ************************************ 00:13:31.069 START TEST nvmf_delete_subsystem 00:13:31.069 ************************************ 00:13:31.069 06:50:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:31.069 * Looking for test storage... 
00:13:31.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:31.069 06:50:44 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.069 06:50:44 -- nvmf/common.sh@7 -- # uname -s 00:13:31.069 06:50:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.069 06:50:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.069 06:50:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.069 06:50:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.069 06:50:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.069 06:50:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.069 06:50:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.069 06:50:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.069 06:50:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.069 06:50:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.069 06:50:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.069 06:50:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.069 06:50:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.069 06:50:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.069 06:50:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.069 06:50:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.069 06:50:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.069 06:50:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.069 06:50:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.069 06:50:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.069 06:50:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.069 06:50:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.069 06:50:44 -- paths/export.sh@5 -- # export PATH 00:13:31.069 06:50:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.069 06:50:44 -- nvmf/common.sh@46 -- # : 0 00:13:31.069 06:50:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:31.069 06:50:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:31.069 06:50:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:31.069 06:50:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.069 06:50:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.069 06:50:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:31.069 06:50:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:31.069 06:50:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:31.069 06:50:44 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:31.069 06:50:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:31.069 06:50:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.069 06:50:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:31.069 06:50:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:31.069 06:50:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:31.069 06:50:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.069 06:50:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.069 06:50:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.069 06:50:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:31.069 06:50:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:31.069 06:50:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:31.069 06:50:44 -- common/autotest_common.sh@10 -- # set +x 00:13:33.601 06:50:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:33.601 06:50:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:33.601 06:50:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:33.601 06:50:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:33.601 06:50:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:33.601 06:50:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:33.601 06:50:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:33.601 06:50:47 -- nvmf/common.sh@294 -- # net_devs=() 00:13:33.601 06:50:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:33.601 06:50:47 -- nvmf/common.sh@295 -- # e810=() 00:13:33.601 06:50:47 -- nvmf/common.sh@295 -- # local -ga e810 00:13:33.601 06:50:47 -- nvmf/common.sh@296 -- # x722=() 
00:13:33.601 06:50:47 -- nvmf/common.sh@296 -- # local -ga x722 00:13:33.601 06:50:47 -- nvmf/common.sh@297 -- # mlx=() 00:13:33.601 06:50:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:33.601 06:50:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.602 06:50:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:33.602 06:50:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:33.602 06:50:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:33.602 06:50:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:33.602 06:50:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:33.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:33.602 06:50:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:33.602 06:50:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:33.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:33.602 06:50:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:33.602 06:50:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:33.602 06:50:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.602 06:50:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:33.602 06:50:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.602 06:50:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:33.602 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:33.602 06:50:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
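[Note: the gather_supported_nvmf_pci_devs pass above matches NICs by PCI vendor/device ID (0x8086:0x159b is the Intel E810 "ice" family, as the "Found 0000:0a:00.x" lines show) and then resolves each matched function to its kernel netdev through sysfs. A minimal, hypothetical standalone sketch of the same idea, not the nvmf/common.sh code itself:]

    # Hypothetical sketch: list Intel E810 functions (vendor 0x8086,
    # device 0x159b) and the netdev bound to each, via the sysfs layout.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")    # e.g. 0x8086
        device=$(<"$pci/device")    # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        for netdev in "$pci"/net/*; do
            # the glob stays literal when net/ is absent, so test existence
            [[ -e $netdev ]] && echo "${pci##*/} -> ${netdev##*/}"
        done
    done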
00:13:33.602 06:50:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:33.602 06:50:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.602 06:50:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:33.602 06:50:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.602 06:50:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:33.602 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:33.602 06:50:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.602 06:50:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:33.602 06:50:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:33.602 06:50:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:33.602 06:50:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.602 06:50:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.602 06:50:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.602 06:50:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:33.602 06:50:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.602 06:50:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.602 06:50:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:33.602 06:50:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.602 06:50:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.602 06:50:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:33.602 06:50:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:33.602 06:50:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.602 06:50:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.602 06:50:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.602 06:50:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.602 06:50:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:33.602 06:50:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.602 06:50:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.602 06:50:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.602 06:50:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:33.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:13:33.602 00:13:33.602 --- 10.0.0.2 ping statistics --- 00:13:33.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.602 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:13:33.602 06:50:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:13:33.602 00:13:33.602 --- 10.0.0.1 ping statistics --- 00:13:33.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.602 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:33.602 06:50:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.602 06:50:47 -- nvmf/common.sh@410 -- # return 0 00:13:33.602 06:50:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:33.602 06:50:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.602 06:50:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:33.602 06:50:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.602 06:50:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:33.602 06:50:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:33.602 06:50:47 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:33.602 06:50:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:33.602 06:50:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:33.602 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:33.602 06:50:47 -- nvmf/common.sh@469 -- # nvmfpid=467513 00:13:33.602 06:50:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:33.602 06:50:47 -- nvmf/common.sh@470 -- # waitforlisten 467513 00:13:33.602 06:50:47 -- common/autotest_common.sh@819 -- # '[' -z 467513 ']' 00:13:33.602 06:50:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.602 06:50:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:33.602 06:50:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.602 06:50:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:33.602 06:50:47 -- common/autotest_common.sh@10 -- # set +x 00:13:33.602 [2024-05-15 06:50:47.509322] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:33.602 [2024-05-15 06:50:47.509396] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.602 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.602 [2024-05-15 06:50:47.588615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:33.602 [2024-05-15 06:50:47.696657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:33.602 [2024-05-15 06:50:47.696790] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.602 [2024-05-15 06:50:47.696805] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.602 [2024-05-15 06:50:47.696816] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
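[Note: the nvmf_tcp_init wiring earlier in the trace splits the NIC pair across namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk for the target (nvmf_tgt is then launched via "ip netns exec cvl_0_0_ns_spdk"), while cvl_0_1 stays in the root namespace for the initiator, and the two pings verify connectivity in both directions. Condensed from the commands in the trace (run as root; error handling omitted):]

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator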
00:13:33.602 [2024-05-15 06:50:47.696895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.602 [2024-05-15 06:50:47.696900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.536 06:50:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:34.536 06:50:48 -- common/autotest_common.sh@852 -- # return 0 00:13:34.536 06:50:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:34.536 06:50:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:34.536 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.536 06:50:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.536 06:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.536 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.536 [2024-05-15 06:50:48.529960] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.536 06:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.536 06:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.536 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.536 06:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.536 06:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.536 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.536 [2024-05-15 06:50:48.546129] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.536 06:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:34.536 06:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.536 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.536 NULL1 00:13:34.536 06:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:34.536 06:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.536 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.536 Delay0 00:13:34.536 06:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.536 06:50:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.536 06:50:48 -- common/autotest_common.sh@10 -- # set +x 00:13:34.536 06:50:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@28 -- # perf_pid=467650 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:34.536 06:50:48 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:34.536 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.536 [2024-05-15 06:50:48.630904] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:13:36.436 06:50:50 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:36.436 06:50:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:36.436 06:50:50 -- common/autotest_common.sh@10 -- # set +x
00:13:36.694 [Several hundred repetitive "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions from spdk_nvme_perf elided; the distinctive nvme_tcp errors interleaved in that stream are kept below.]
00:13:36.694 [2024-05-15 06:50:50.842987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b1400c350 is same with the state(5) to be set
00:13:36.695 [2024-05-15 06:50:50.843779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118bc60 is same with the state(5) to be set
00:13:36.695 [2024-05-15 06:50:50.844259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b14000c00 is same with the state(5) to be set
00:13:37.640 [2024-05-15 06:50:51.810669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aa5a0 is same with the state(5) to be set
00:13:37.640 [2024-05-15 06:50:51.843269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118ae10 is same with the state(5) to be set
00:13:37.641 [2024-05-15 06:50:51.843563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118b9b0 is same with the state(5) to be set
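[Note: for orientation, the delete-under-load flow this test exercises can be reproduced by hand. A sketch assuming a running nvmf_tgt and SPDK's scripts/rpc.py on PATH; the RPC names and arguments are taken verbatim from the rpc_cmd trace above (rpc_cmd is a thin wrapper around rpc.py), while the backgrounding and timing are simplifications:]

    # Provision the target (same RPCs as the trace above).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Start I/O in the background, then delete the subsystem out from
    # under it; in-flight commands come back aborted, as the stream here shows.
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1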
sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Write completed with error (sct=0, sc=8) 00:13:37.641 Write completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Write completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 [2024-05-15 06:50:51.847492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b1400c600 is same with the state(5) to be set 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Read completed with error (sct=0, sc=8) 00:13:37.641 Write completed with error (sct=0, sc=8) 00:13:37.642 Read completed with error (sct=0, sc=8) 00:13:37.642 Read completed with error (sct=0, sc=8) 00:13:37.642 Read completed with error (sct=0, sc=8) 00:13:37.642 Write completed with error (sct=0, sc=8) 00:13:37.642 Write completed with error (sct=0, sc=8) 00:13:37.642 Read completed with error (sct=0, sc=8) 00:13:37.642 Read completed with error (sct=0, sc=8) 00:13:37.642 Read completed with error (sct=0, sc=8) 00:13:37.642 Write completed with error (sct=0, sc=8) 00:13:37.642 Write completed with error (sct=0, sc=8) 00:13:37.642 Read completed with error (sct=0, sc=8) 00:13:37.642 [2024-05-15 06:50:51.847686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7b1400bf20 is same with the state(5) to be set 00:13:37.642 [2024-05-15 06:50:51.848596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aa5a0 (9): Bad file descriptor 00:13:37.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:37.642 06:50:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.642 06:50:51 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:37.642 06:50:51 -- target/delete_subsystem.sh@35 -- # kill -0 467650 00:13:37.642 06:50:51 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:37.642 Initializing NVMe Controllers 00:13:37.642 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.642 Controller IO queue size 128, less than required. 00:13:37.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:37.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:37.642 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:37.642 Initialization complete. Launching workers. 
00:13:37.642 ========================================================
00:13:37.642 Latency(us)
00:13:37.643 Device Information : IOPS MiB/s Average min max
00:13:37.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 160.27 0.08 919312.03 595.31 1013365.72
00:13:37.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.31 0.08 930011.00 1282.86 1013169.00
00:13:37.643 ========================================================
00:13:37.643 Total : 315.59 0.15 924577.41 595.31 1013365.72
00:13:37.643
00:13:38.212 06:50:52 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@35 -- # kill -0 467650 00:13:38.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (467650) - No such process 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@45 -- # NOT wait 467650 00:13:38.212 06:50:52 -- common/autotest_common.sh@640 -- # local es=0 00:13:38.212 06:50:52 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 467650 00:13:38.212 06:50:52 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:38.212 06:50:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:38.212 06:50:52 -- common/autotest_common.sh@632 -- # type -t wait 00:13:38.212 06:50:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:38.212 06:50:52 -- common/autotest_common.sh@643 -- # wait 467650 00:13:38.212 06:50:52 -- common/autotest_common.sh@643 -- # es=1 00:13:38.212 06:50:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:38.212 06:50:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:38.212 06:50:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:38.212 06:50:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.212 06:50:52 -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 06:50:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.212 06:50:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.212 06:50:52 -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 [2024-05-15 06:50:52.367338] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.212 06:50:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.212 06:50:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.212 06:50:52 -- common/autotest_common.sh@10 -- # set +x 00:13:38.212 06:50:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@54 -- # perf_pid=468170 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:38.212 06:50:52 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:38.212 EAL: No free 2048 kB hugepages reported on
node 1 00:13:38.212 [2024-05-15 06:50:52.432809] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:38.776 06:50:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:38.776 06:50:52 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:38.776 06:50:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:39.342 06:50:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:39.342 06:50:53 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:39.342 06:50:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:39.907 06:50:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:39.907 06:50:53 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:39.907 06:50:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:40.165 06:50:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:40.165 06:50:54 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:40.165 06:50:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:40.731 06:50:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:40.731 06:50:54 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:40.731 06:50:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.297 06:50:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.297 06:50:55 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:41.297 06:50:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:41.555 Initializing NVMe Controllers 00:13:41.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:41.555 Controller IO queue size 128, less than required. 00:13:41.555 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:41.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:41.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:41.556 Initialization complete. Launching workers. 
00:13:41.556 ========================================================
00:13:41.556 Latency(us)
00:13:41.556 Device Information : IOPS MiB/s Average min max
00:13:41.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004546.40 1000242.20 1012467.55
00:13:41.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005174.63 1000230.03 1011192.22
00:13:41.556 ========================================================
00:13:41.556 Total : 256.00 0.12 1004860.52 1000230.03 1012467.55
00:13:41.556
00:13:41.814 06:50:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:41.814 06:50:55 -- target/delete_subsystem.sh@57 -- # kill -0 468170 00:13:41.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (468170) - No such process 00:13:41.814 06:50:55 -- target/delete_subsystem.sh@67 -- # wait 468170 00:13:41.814 06:50:55 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:41.814 06:50:55 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:41.814 06:50:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:41.814 06:50:55 -- nvmf/common.sh@116 -- # sync 00:13:41.814 06:50:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:41.814 06:50:55 -- nvmf/common.sh@119 -- # set +e 00:13:41.814 06:50:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:41.814 06:50:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:41.814 rmmod nvme_tcp 00:13:41.814 rmmod nvme_fabrics 00:13:41.814 rmmod nvme_keyring 00:13:41.814 06:50:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:41.814 06:50:55 -- nvmf/common.sh@123 -- # set -e 00:13:41.814 06:50:55 -- nvmf/common.sh@124 -- # return 0 00:13:41.814 06:50:55 -- nvmf/common.sh@477 -- # '[' -n 467513 ']' 00:13:41.814 06:50:55 -- nvmf/common.sh@478 -- # killprocess 467513 00:13:41.814 06:50:55 -- common/autotest_common.sh@926 -- # '[' -z 467513 ']' 00:13:41.814 06:50:55 -- common/autotest_common.sh@930 -- # kill -0 467513 00:13:41.814 06:50:55 -- common/autotest_common.sh@931 -- # uname 00:13:41.814 06:50:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:41.814 06:50:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 467513 00:13:41.814 06:50:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:41.814 06:50:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:41.814 06:50:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 467513' 00:13:41.814 killing process with pid 467513 00:13:41.814 06:50:55 -- common/autotest_common.sh@945 -- # kill 467513 00:13:41.814 06:50:55 -- common/autotest_common.sh@950 -- # wait 467513 00:13:42.073 06:50:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:42.073 06:50:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:42.073 06:50:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:42.073 06:50:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.073 06:50:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:42.073 06:50:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.073 06:50:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.073 06:50:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.618 06:50:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:44.618 00:13:44.618 real 0m13.559s 00:13:44.618 user 0m29.812s 00:13:44.618 sys 0m3.379s 06:50:58
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.618 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.618 ************************************ 00:13:44.618 END TEST nvmf_delete_subsystem 00:13:44.618 ************************************ 00:13:44.618 06:50:58 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:13:44.618 06:50:58 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:44.618 06:50:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:44.618 06:50:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.618 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:44.618 ************************************ 00:13:44.618 START TEST nvmf_nvme_cli 00:13:44.618 ************************************ 00:13:44.618 06:50:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:44.618 * Looking for test storage... 00:13:44.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.618 06:50:58 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.618 06:50:58 -- nvmf/common.sh@7 -- # uname -s 00:13:44.618 06:50:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.619 06:50:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.619 06:50:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.619 06:50:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.619 06:50:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.619 06:50:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.619 06:50:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.619 06:50:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.619 06:50:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.619 06:50:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.619 06:50:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.619 06:50:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.619 06:50:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.619 06:50:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.619 06:50:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.619 06:50:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.619 06:50:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.619 06:50:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.619 06:50:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.619 06:50:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.619 06:50:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.619 06:50:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.619 06:50:58 -- paths/export.sh@5 -- # export PATH 00:13:44.619 06:50:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.619 06:50:58 -- nvmf/common.sh@46 -- # : 0 00:13:44.619 06:50:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:44.619 06:50:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:44.619 06:50:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:44.619 06:50:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.619 06:50:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.619 06:50:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:44.619 06:50:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:44.619 06:50:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:44.619 06:50:58 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.619 06:50:58 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.619 06:50:58 -- target/nvme_cli.sh@14 -- # devs=() 00:13:44.619 06:50:58 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:44.619 06:50:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:44.619 06:50:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.619 06:50:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:44.619 06:50:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:44.619 06:50:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:44.619 06:50:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.619 06:50:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.619 06:50:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.619 06:50:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:44.619 06:50:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:44.619 06:50:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:44.619 06:50:58 -- common/autotest_common.sh@10 -- # set +x 00:13:47.146 06:51:00 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.146 06:51:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:47.146 06:51:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:47.146 06:51:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:47.146 06:51:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:47.146 06:51:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:47.146 06:51:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:47.146 06:51:00 -- nvmf/common.sh@294 -- # net_devs=() 00:13:47.146 06:51:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:47.146 06:51:00 -- nvmf/common.sh@295 -- # e810=() 00:13:47.146 06:51:00 -- nvmf/common.sh@295 -- # local -ga e810 00:13:47.146 06:51:00 -- nvmf/common.sh@296 -- # x722=() 00:13:47.146 06:51:00 -- nvmf/common.sh@296 -- # local -ga x722 00:13:47.146 06:51:00 -- nvmf/common.sh@297 -- # mlx=() 00:13:47.146 06:51:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:47.146 06:51:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.146 06:51:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:47.146 06:51:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:47.146 06:51:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:47.146 06:51:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.146 06:51:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:47.146 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:47.146 06:51:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.146 06:51:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:47.146 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:47.146 06:51:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
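The block above is nvmf/common.sh matching its cached PCI table against the supported NIC IDs and finding both ports of an Intel E810 (vendor 0x8086, device 0x159b, driver ice). A rough standalone equivalent of that discovery step, using lspci rather than the script's internal cache (illustrative only; the harness does not actually shell out to lspci):

    # enumerate E810 functions and the netdevs the kernel bound to them
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        # sysfs exposes the bound network interface name per PCI function
        echo "Found net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
    done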
00:13:47.146 06:51:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:47.146 06:51:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.146 06:51:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.146 06:51:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.146 06:51:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.146 06:51:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:47.146 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:47.146 06:51:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.146 06:51:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.146 06:51:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.146 06:51:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.146 06:51:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.146 06:51:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:47.146 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:47.146 06:51:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.146 06:51:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:47.146 06:51:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:47.146 06:51:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:47.146 06:51:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:47.146 06:51:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.146 06:51:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.146 06:51:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.146 06:51:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:47.146 06:51:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.146 06:51:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.146 06:51:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:47.146 06:51:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.146 06:51:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.146 06:51:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:47.146 06:51:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:47.146 06:51:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.146 06:51:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.146 06:51:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.146 06:51:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.146 06:51:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:47.146 06:51:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.146 06:51:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.146 06:51:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
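The ip commands just traced pin one port of the E810 into a private network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) can exercise real hardware on a single host. Distilled from the trace above, the essential sequence is:

    ip netns add cvl_0_0_ns_spdk                  # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

The ping exchange below is the script verifying both directions before it launches the target.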
00:13:47.146 06:51:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:13:47.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:47.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms
00:13:47.146
00:13:47.146 --- 10.0.0.2 ping statistics ---
00:13:47.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:47.146 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:13:47.146 06:51:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:47.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:47.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:13:47.146
00:13:47.147 --- 10.0.0.1 ping statistics ---
00:13:47.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:47.147 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:13:47.147 06:51:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.147 06:51:01 -- nvmf/common.sh@410 -- # return 0 00:13:47.147 06:51:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.147 06:51:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.147 06:51:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:47.147 06:51:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:47.147 06:51:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.147 06:51:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:47.147 06:51:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:47.147 06:51:01 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:47.147 06:51:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.147 06:51:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:47.147 06:51:01 -- common/autotest_common.sh@10 -- # set +x 00:13:47.147 06:51:01 -- nvmf/common.sh@469 -- # nvmfpid=470927 00:13:47.147 06:51:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.147 06:51:01 -- nvmf/common.sh@470 -- # waitforlisten 470927 00:13:47.147 06:51:01 -- common/autotest_common.sh@819 -- # '[' -z 470927 ']' 00:13:47.147 06:51:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.147 06:51:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:47.147 06:51:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.147 06:51:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:47.147 06:51:01 -- common/autotest_common.sh@10 -- # set +x
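waitforlisten, traced above, blocks until the freshly launched nvmf_tgt answers on its RPC socket or until max_retries=100 runs out. A simplified sketch of the idea (the real helper in autotest_common.sh is more defensive, but the core is a poll like this):

    pid=470927
    rpc_addr=/var/tmp/spdk.sock
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || exit 1   # target died during startup
        # any cheap RPC works; it succeeds once the app is up and listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done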
00:13:47.147 [2024-05-15 06:51:01.117564] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:47.147 [2024-05-15 06:51:01.117635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.147 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.147 [2024-05-15 06:51:01.197129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.147 [2024-05-15 06:51:01.305085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:47.147 [2024-05-15 06:51:01.305243] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.147 [2024-05-15 06:51:01.305260] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.147 [2024-05-15 06:51:01.305273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.147 [2024-05-15 06:51:01.305342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.147 [2024-05-15 06:51:01.305405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.147 [2024-05-15 06:51:01.305470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.147 [2024-05-15 06:51:01.305473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.095 06:51:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:48.095 06:51:02 -- common/autotest_common.sh@852 -- # return 0 00:13:48.095 06:51:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:48.095 06:51:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 06:51:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.095 06:51:02 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 [2024-05-15 06:51:02.089442] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 Malloc0 00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 Malloc1 00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 [2024-05-15 06:51:02.170574] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
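Outside the harness, the rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations; the same target layout could be stood up by hand roughly as follows (default RPC socket /var/tmp/spdk.sock assumed, arguments exactly as logged):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420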
00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:48.095 06:51:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.095 06:51:02 -- common/autotest_common.sh@10 -- # set +x 00:13:48.095 06:51:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.095 06:51:02 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:13:48.095
00:13:48.095 Discovery Log Number of Records 2, Generation counter 2
00:13:48.095 =====Discovery Log Entry 0======
00:13:48.095 trtype: tcp
00:13:48.095 adrfam: ipv4
00:13:48.095 subtype: current discovery subsystem
00:13:48.095 treq: not required
00:13:48.095 portid: 0
00:13:48.095 trsvcid: 4420
00:13:48.095 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:13:48.095 traddr: 10.0.0.2
00:13:48.095 eflags: explicit discovery connections, duplicate discovery information
00:13:48.095 sectype: none
00:13:48.095 =====Discovery Log Entry 1======
00:13:48.095 trtype: tcp
00:13:48.095 adrfam: ipv4
00:13:48.095 subtype: nvme subsystem
00:13:48.095 treq: not required
00:13:48.095 portid: 0
00:13:48.095 trsvcid: 4420
00:13:48.095 subnqn: nqn.2016-06.io.spdk:cnode1
00:13:48.095 traddr: 10.0.0.2
00:13:48.095 eflags: none
00:13:48.095 sectype: none
00:13:48.095 06:51:02 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:48.095 06:51:02 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:48.095 06:51:02 -- nvmf/common.sh@510 -- # local dev _ 00:13:48.095 06:51:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:48.095 06:51:02 -- nvmf/common.sh@509 -- # nvme list 00:13:48.095 06:51:02 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:48.095 06:51:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:48.095 06:51:02 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:48.095 06:51:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:48.095 06:51:02 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:48.095 06:51:02 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.661 06:51:02 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:48.661 06:51:02 -- common/autotest_common.sh@1177 -- # local i=0 00:13:48.661 06:51:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.661 06:51:02 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:13:48.661 06:51:02 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:13:48.661 06:51:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:51.188 06:51:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:51.188 06:51:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:51.188 06:51:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:51.188 06:51:04 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:13:51.188 06:51:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.188 06:51:04 -- common/autotest_common.sh@1187 -- # return 0 00:13:51.188 06:51:04 -- target/nvme_cli.sh@35 -- # get_nvme_devs
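waitforserial, just traced, is nothing more than an lsblk poll: it waits until the expected number of block devices carrying the subsystem serial show up after nvme connect. Reduced to its core (names and bounds taken from the trace; the real helper in autotest_common.sh carries a little more bookkeeping):

    serial=SPDKISFASTANDAWESOME
    want=2                     # one block device per attached namespace
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        # count block devices whose SERIAL column matches
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == want )) && break   # both namespaces are visible
    done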
00:13:51.188 06:51:04 -- nvmf/common.sh@510 -- # local dev _ 00:13:51.188 06:51:04 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:04 -- nvmf/common.sh@509 -- # nvme list 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:51.188 /dev/nvme0n1 ]] 00:13:51.188 06:51:05 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:51.188 06:51:05 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:51.188 06:51:05 -- nvmf/common.sh@510 -- # local dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@509 -- # nvme list 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:51.188 06:51:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:51.188 06:51:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:51.188 06:51:05 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:51.188 06:51:05 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:51.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.447 06:51:05 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:51.447 06:51:05 -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.447 06:51:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:51.447 06:51:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.447 06:51:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:51.447 06:51:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:51.447 06:51:05 -- common/autotest_common.sh@1210 -- # return 0 00:13:51.447 06:51:05 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:51.447 06:51:05 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:51.447 06:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.447 06:51:05 -- common/autotest_common.sh@10 -- # set +x 00:13:51.447 06:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.447 06:51:05 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:51.447 06:51:05 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:51.447 06:51:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:51.447 06:51:05 -- nvmf/common.sh@116 -- # sync 00:13:51.447 06:51:05 --
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:51.447 06:51:05 -- nvmf/common.sh@119 -- # set +e 00:13:51.447 06:51:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:51.447 06:51:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:51.447 rmmod nvme_tcp 00:13:51.447 rmmod nvme_fabrics 00:13:51.447 rmmod nvme_keyring 00:13:51.447 06:51:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:51.447 06:51:05 -- nvmf/common.sh@123 -- # set -e 00:13:51.447 06:51:05 -- nvmf/common.sh@124 -- # return 0 00:13:51.447 06:51:05 -- nvmf/common.sh@477 -- # '[' -n 470927 ']' 00:13:51.447 06:51:05 -- nvmf/common.sh@478 -- # killprocess 470927 00:13:51.447 06:51:05 -- common/autotest_common.sh@926 -- # '[' -z 470927 ']' 00:13:51.447 06:51:05 -- common/autotest_common.sh@930 -- # kill -0 470927 00:13:51.447 06:51:05 -- common/autotest_common.sh@931 -- # uname 00:13:51.447 06:51:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:51.447 06:51:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 470927 00:13:51.447 06:51:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:51.447 06:51:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:51.447 06:51:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 470927' 00:13:51.447 killing process with pid 470927 00:13:51.447 06:51:05 -- common/autotest_common.sh@945 -- # kill 470927 00:13:51.447 06:51:05 -- common/autotest_common.sh@950 -- # wait 470927 00:13:51.706 06:51:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:51.706 06:51:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:51.706 06:51:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:51.706 06:51:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.706 06:51:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:51.706 06:51:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.706 06:51:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.706 06:51:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.239 06:51:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:54.239 00:13:54.239 real 0m9.609s 00:13:54.239 user 0m18.928s 00:13:54.239 sys 0m2.561s 00:13:54.239 06:51:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.239 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:13:54.239 ************************************ 00:13:54.239 END TEST nvmf_nvme_cli 00:13:54.239 ************************************ 00:13:54.239 06:51:07 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:54.239 06:51:07 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:54.239 06:51:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:54.239 06:51:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:54.239 06:51:07 -- common/autotest_common.sh@10 -- # set +x 00:13:54.239 ************************************ 00:13:54.240 START TEST nvmf_host_management 00:13:54.240 ************************************ 00:13:54.240 06:51:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:54.240 * Looking for test storage... 
00:13:54.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.240 06:51:08 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.240 06:51:08 -- nvmf/common.sh@7 -- # uname -s 00:13:54.240 06:51:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.240 06:51:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.240 06:51:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.240 06:51:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.240 06:51:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.240 06:51:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.240 06:51:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.240 06:51:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.240 06:51:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.240 06:51:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.240 06:51:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.240 06:51:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:54.240 06:51:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.240 06:51:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.240 06:51:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.240 06:51:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.240 06:51:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.240 06:51:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.240 06:51:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.240 06:51:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.240 06:51:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.240 06:51:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.240 06:51:08 -- paths/export.sh@5 -- # export PATH 00:13:54.240 06:51:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.240 06:51:08 -- nvmf/common.sh@46 -- # : 0 00:13:54.240 06:51:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:54.240 06:51:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:54.240 06:51:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:54.240 06:51:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.240 06:51:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.240 06:51:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:54.240 06:51:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:54.240 06:51:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:54.240 06:51:08 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.240 06:51:08 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.240 06:51:08 -- target/host_management.sh@104 -- # nvmftestinit 00:13:54.240 06:51:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:54.240 06:51:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.240 06:51:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:54.240 06:51:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:54.240 06:51:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:54.240 06:51:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.240 06:51:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.240 06:51:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.240 06:51:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:54.240 06:51:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:54.240 06:51:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:54.240 06:51:08 -- common/autotest_common.sh@10 -- # set +x 00:13:56.774 06:51:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:56.774 06:51:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:56.774 06:51:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:56.774 06:51:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:56.774 06:51:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:56.775 06:51:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:56.775 06:51:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:56.775 06:51:10 -- nvmf/common.sh@294 -- # net_devs=() 00:13:56.775 06:51:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:56.775 
06:51:10 -- nvmf/common.sh@295 -- # e810=() 00:13:56.775 06:51:10 -- nvmf/common.sh@295 -- # local -ga e810 00:13:56.775 06:51:10 -- nvmf/common.sh@296 -- # x722=() 00:13:56.775 06:51:10 -- nvmf/common.sh@296 -- # local -ga x722 00:13:56.775 06:51:10 -- nvmf/common.sh@297 -- # mlx=() 00:13:56.775 06:51:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:56.775 06:51:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.775 06:51:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:56.775 06:51:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:56.775 06:51:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:56.775 06:51:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:56.775 06:51:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:56.775 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:56.775 06:51:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:56.775 06:51:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:56.775 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:56.775 06:51:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:56.775 06:51:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:56.775 06:51:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.775 06:51:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:56.775 06:51:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.775 06:51:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:13:56.775 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:56.775 06:51:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.775 06:51:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:56.775 06:51:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.775 06:51:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:56.775 06:51:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.775 06:51:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:56.775 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:56.775 06:51:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.775 06:51:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:56.775 06:51:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:56.775 06:51:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:56.775 06:51:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.775 06:51:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.775 06:51:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.775 06:51:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:56.775 06:51:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.775 06:51:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.775 06:51:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:56.775 06:51:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.775 06:51:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.775 06:51:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:56.775 06:51:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:56.775 06:51:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.775 06:51:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.775 06:51:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.775 06:51:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.775 06:51:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:56.775 06:51:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.775 06:51:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.775 06:51:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:56.775 06:51:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:13:56.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:56.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms
00:13:56.775
00:13:56.775 --- 10.0.0.2 ping statistics ---
00:13:56.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:56.775 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms
00:13:56.775 06:51:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:56.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:56.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:13:56.775 00:13:56.775 --- 10.0.0.1 ping statistics --- 00:13:56.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.775 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:13:56.775 06:51:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.775 06:51:10 -- nvmf/common.sh@410 -- # return 0 00:13:56.775 06:51:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:56.775 06:51:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.775 06:51:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:56.775 06:51:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.775 06:51:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:56.775 06:51:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:56.775 06:51:10 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:56.775 06:51:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:56.775 06:51:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.775 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:13:56.775 ************************************ 00:13:56.775 START TEST nvmf_host_management 00:13:56.775 ************************************ 00:13:56.775 06:51:10 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:56.775 06:51:10 -- target/host_management.sh@69 -- # starttarget 00:13:56.775 06:51:10 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:56.775 06:51:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:56.775 06:51:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:56.775 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:13:56.775 06:51:10 -- nvmf/common.sh@469 -- # nvmfpid=474397 00:13:56.775 06:51:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:56.775 06:51:10 -- nvmf/common.sh@470 -- # waitforlisten 474397 00:13:56.775 06:51:10 -- common/autotest_common.sh@819 -- # '[' -z 474397 ']' 00:13:56.775 06:51:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.775 06:51:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:56.775 06:51:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.775 06:51:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:56.775 06:51:10 -- common/autotest_common.sh@10 -- # set +x 00:13:56.775 [2024-05-15 06:51:10.601552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
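
A note on the nvmf_tcp_init sequence traced above: with two ports of the same E810 NIC (cvl_0_0, cvl_0_1, presumably cabled back-to-back given NET_TYPE=phy), the harness moves the target port into a private network namespace so initiator-to-target traffic cannot short-circuit through the local stack. Condensed from the trace (commands as logged; requires root):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                        # default ns -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> default ns

The two pings, both answered in about 0.25 ms above, gate the run before any NVMe-oF traffic is attempted.
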
00:13:56.775 [2024-05-15 06:51:10.601641] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.775 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.775 [2024-05-15 06:51:10.682160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.775 [2024-05-15 06:51:10.803415] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:56.775 [2024-05-15 06:51:10.803575] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.775 [2024-05-15 06:51:10.803594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.775 [2024-05-15 06:51:10.803608] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.775 [2024-05-15 06:51:10.803692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.775 [2024-05-15 06:51:10.803745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.775 [2024-05-15 06:51:10.803794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.775 [2024-05-15 06:51:10.803797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.709 06:51:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:57.709 06:51:11 -- common/autotest_common.sh@852 -- # return 0 00:13:57.709 06:51:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:57.709 06:51:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:57.709 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.709 06:51:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.709 06:51:11 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:57.709 06:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.709 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 [2024-05-15 06:51:11.608488] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.710 06:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.710 06:51:11 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:57.710 06:51:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:57.710 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 06:51:11 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:57.710 06:51:11 -- target/host_management.sh@23 -- # cat 00:13:57.710 06:51:11 -- target/host_management.sh@30 -- # rpc_cmd 00:13:57.710 06:51:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.710 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 Malloc0 00:13:57.710 [2024-05-15 06:51:11.674271] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.710 06:51:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.710 06:51:11 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:57.710 06:51:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:57.710 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 06:51:11 -- target/host_management.sh@73 -- # perfpid=474576 00:13:57.710 06:51:11 -- target/host_management.sh@74 -- # 
waitforlisten 474576 /var/tmp/bdevperf.sock 00:13:57.710 06:51:11 -- common/autotest_common.sh@819 -- # '[' -z 474576 ']' 00:13:57.710 06:51:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.710 06:51:11 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:57.710 06:51:11 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:57.710 06:51:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:57.710 06:51:11 -- nvmf/common.sh@520 -- # config=() 00:13:57.710 06:51:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.710 06:51:11 -- nvmf/common.sh@520 -- # local subsystem config 00:13:57.710 06:51:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:57.710 06:51:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:57.710 06:51:11 -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 06:51:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:57.710 { 00:13:57.710 "params": { 00:13:57.710 "name": "Nvme$subsystem", 00:13:57.710 "trtype": "$TEST_TRANSPORT", 00:13:57.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:57.710 "adrfam": "ipv4", 00:13:57.710 "trsvcid": "$NVMF_PORT", 00:13:57.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:57.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:57.710 "hdgst": ${hdgst:-false}, 00:13:57.710 "ddgst": ${ddgst:-false} 00:13:57.710 }, 00:13:57.710 "method": "bdev_nvme_attach_controller" 00:13:57.710 } 00:13:57.710 EOF 00:13:57.710 )") 00:13:57.710 06:51:11 -- nvmf/common.sh@542 -- # cat 00:13:57.710 06:51:11 -- nvmf/common.sh@544 -- # jq . 00:13:57.710 06:51:11 -- nvmf/common.sh@545 -- # IFS=, 00:13:57.710 06:51:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:57.710 "params": { 00:13:57.710 "name": "Nvme0", 00:13:57.710 "trtype": "tcp", 00:13:57.710 "traddr": "10.0.0.2", 00:13:57.710 "adrfam": "ipv4", 00:13:57.710 "trsvcid": "4420", 00:13:57.710 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:57.710 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:57.710 "hdgst": false, 00:13:57.710 "ddgst": false 00:13:57.710 }, 00:13:57.710 "method": "bdev_nvme_attach_controller" 00:13:57.710 }' 00:13:57.710 [2024-05-15 06:51:11.749210] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:57.710 [2024-05-15 06:51:11.749306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474576 ] 00:13:57.710 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.710 [2024-05-15 06:51:11.821097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.710 [2024-05-15 06:51:11.929239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.276 Running I/O for 10 seconds... 
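
Worth noting about the bdevperf invocation traced above: no config file ever touches disk. gen_nvmf_target_json renders a single bdev_nvme_attach_controller stanza (the printf output above) and the harness hands it over as --json /dev/fd/63, which is simply how a bash process substitution appears to the child process. A minimal sketch of the same pattern; the parameter values are copied from the trace, while the outer "subsystems" wrapper is an assumption based on SPDK's usual JSON config layout:

    json='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } } ] } ] }'
    # <(printf ...) shows up in the child as /dev/fd/63, matching the traced command line
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(printf '%s\n' "$json") -q 64 -o 65536 -w verify -t 10
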
00:13:58.537 06:51:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:58.537 06:51:12 -- common/autotest_common.sh@852 -- # return 0 00:13:58.537 06:51:12 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:58.537 06:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.537 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:58.537 06:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.537 06:51:12 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.537 06:51:12 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:58.537 06:51:12 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:58.537 06:51:12 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:58.537 06:51:12 -- target/host_management.sh@52 -- # local ret=1 00:13:58.537 06:51:12 -- target/host_management.sh@53 -- # local i 00:13:58.537 06:51:12 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:58.537 06:51:12 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:58.537 06:51:12 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:58.537 06:51:12 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:58.537 06:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.537 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:58.537 06:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.537 06:51:12 -- target/host_management.sh@55 -- # read_io_count=812 00:13:58.537 06:51:12 -- target/host_management.sh@58 -- # '[' 812 -ge 100 ']' 00:13:58.537 06:51:12 -- target/host_management.sh@59 -- # ret=0 00:13:58.537 06:51:12 -- target/host_management.sh@60 -- # break 00:13:58.537 06:51:12 -- target/host_management.sh@64 -- # return 0 00:13:58.537 06:51:12 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:58.537 06:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.537 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:58.537 [2024-05-15 06:51:12.737911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to 
be set 00:13:58.537 [2024-05-15 06:51:12.738656]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.537 [2024-05-15 06:51:12.738754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.538 [2024-05-15 06:51:12.738766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.538 [2024-05-15 06:51:12.738779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.538 [2024-05-15 06:51:12.738791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc09480 is same with the state(5) to be set 00:13:58.538 [2024-05-15 06:51:12.739444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739626] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.739975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.739990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.538 [2024-05-15 06:51:12.740669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.538 [2024-05-15 06:51:12.740684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.740965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.740989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:58.539 [2024-05-15 06:51:12.741440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:58.539 [2024-05-15 06:51:12.741454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2554b50 is same with the state(5) to be set 00:13:58.539 [2024-05-15 06:51:12.741532] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2554b50 was disconnected and freed. reset controller. 00:13:58.539 06:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.539 06:51:12 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:58.539 [2024-05-15 06:51:12.742667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:58.539 06:51:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.539 06:51:12 -- common/autotest_common.sh@10 -- # set +x 00:13:58.539 task offset: 110208 on job bdev=Nvme0n1 fails 00:13:58.539 00:13:58.539 Latency(us) 00:13:58.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.539 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:58.539 Job: Nvme0n1 ended in about 0.47 seconds with error 00:13:58.539 Verification LBA range: start 0x0 length 0x400 00:13:58.539 Nvme0n1 : 0.47 1883.54 117.72 135.75 0.00 31284.18 3155.44 37671.06 00:13:58.539 =================================================================================================================== 00:13:58.539 Total : 1883.54 117.72 135.75 0.00 31284.18 3155.44 37671.06 00:13:58.539 [2024-05-15 06:51:12.744592] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.539 [2024-05-15 06:51:12.744619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2557400 (9): Bad file descriptor 00:13:58.539 06:51:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.539 06:51:12 -- target/host_management.sh@87 -- # sleep 1 00:13:58.539 [2024-05-15 06:51:12.751995] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:59.927 06:51:13 -- target/host_management.sh@91 -- # kill -9 474576 00:13:59.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (474576) - No such process 00:13:59.927 06:51:13 -- target/host_management.sh@91 -- # true 00:13:59.927 06:51:13 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:59.927 06:51:13 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:59.927 06:51:13 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:59.927 06:51:13 -- nvmf/common.sh@520 -- # config=() 00:13:59.927 06:51:13 -- nvmf/common.sh@520 -- # local subsystem config 00:13:59.927 06:51:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:59.927 06:51:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:59.927 { 00:13:59.927 "params": { 00:13:59.927 "name": "Nvme$subsystem", 00:13:59.927 "trtype": "$TEST_TRANSPORT", 00:13:59.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:59.927 "adrfam": "ipv4", 00:13:59.927 "trsvcid": "$NVMF_PORT", 00:13:59.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:59.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:59.927 "hdgst": ${hdgst:-false}, 00:13:59.927 "ddgst": ${ddgst:-false} 00:13:59.927 }, 00:13:59.927 "method": "bdev_nvme_attach_controller" 00:13:59.927 } 00:13:59.927 EOF 00:13:59.927 )") 00:13:59.927 06:51:13 -- nvmf/common.sh@542 -- # cat 00:13:59.927 06:51:13 -- nvmf/common.sh@544 -- # jq . 
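
Two things above deserve a gloss. First, the wall of ABORTED - SQ DELETION completions is the test working as intended: host_management.sh revokes the host mid-I/O with nvmf_subsystem_remove_host, every command queued on the deleted submission queue aborts and the job fails, then nvmf_subsystem_add_host lets the initiator's automatic reset reconnect ("Resetting controller successful"). Second, the "kill: (474576) - No such process" line is benign: bdevperf had already exited after the failed job, so the scripted kill is guarded. Reduced to its core, shown here with scripts/rpc.py against the default RPC socket (the trace drives the same RPCs through its rpc_cmd wrapper):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$RPC" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ...in-flight I/O completes with ABORTED - SQ DELETION; bdevperf reports the job failed...
    "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # initiator resets and reconnects; clean up, tolerating an already-dead perf process
    kill -9 "$perfpid" || true
    rm -f /var/tmp/spdk_cpu_lock_00{1..4}    # stale core locks left by the SIGKILL'd run
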
00:13:59.927 06:51:13 -- nvmf/common.sh@545 -- # IFS=, 00:13:59.927 06:51:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:59.927 "params": { 00:13:59.927 "name": "Nvme0", 00:13:59.927 "trtype": "tcp", 00:13:59.927 "traddr": "10.0.0.2", 00:13:59.927 "adrfam": "ipv4", 00:13:59.927 "trsvcid": "4420", 00:13:59.927 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:59.927 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:59.927 "hdgst": false, 00:13:59.927 "ddgst": false 00:13:59.927 }, 00:13:59.927 "method": "bdev_nvme_attach_controller" 00:13:59.927 }' 00:13:59.927 [2024-05-15 06:51:13.793181] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:59.927 [2024-05-15 06:51:13.793289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474863 ] 00:13:59.927 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.927 [2024-05-15 06:51:13.866810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.927 [2024-05-15 06:51:13.973305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.185 Running I/O for 1 seconds... 00:14:01.120 00:14:01.120 Latency(us) 00:14:01.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:01.120 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:01.120 Verification LBA range: start 0x0 length 0x400 00:14:01.120 Nvme0n1 : 1.01 2557.55 159.85 0.00 0.00 24666.86 3907.89 33593.27 00:14:01.120 =================================================================================================================== 00:14:01.120 Total : 2557.55 159.85 0.00 0.00 24666.86 3907.89 33593.27 00:14:01.380 06:51:15 -- target/host_management.sh@101 -- # stoptarget 00:14:01.380 06:51:15 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:01.380 06:51:15 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:01.380 06:51:15 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:01.380 06:51:15 -- target/host_management.sh@40 -- # nvmftestfini 00:14:01.380 06:51:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:01.380 06:51:15 -- nvmf/common.sh@116 -- # sync 00:14:01.380 06:51:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:01.380 06:51:15 -- nvmf/common.sh@119 -- # set +e 00:14:01.380 06:51:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:01.380 06:51:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:01.380 rmmod nvme_tcp 00:14:01.380 rmmod nvme_fabrics 00:14:01.380 rmmod nvme_keyring 00:14:01.380 06:51:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:01.380 06:51:15 -- nvmf/common.sh@123 -- # set -e 00:14:01.380 06:51:15 -- nvmf/common.sh@124 -- # return 0 00:14:01.380 06:51:15 -- nvmf/common.sh@477 -- # '[' -n 474397 ']' 00:14:01.380 06:51:15 -- nvmf/common.sh@478 -- # killprocess 474397 00:14:01.380 06:51:15 -- common/autotest_common.sh@926 -- # '[' -z 474397 ']' 00:14:01.380 06:51:15 -- common/autotest_common.sh@930 -- # kill -0 474397 00:14:01.380 06:51:15 -- common/autotest_common.sh@931 -- # uname 00:14:01.380 06:51:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:01.380 06:51:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 474397 00:14:01.639 06:51:15 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:01.639 06:51:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:01.639 06:51:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 474397' 00:14:01.639 killing process with pid 474397 00:14:01.639 06:51:15 -- common/autotest_common.sh@945 -- # kill 474397 00:14:01.639 06:51:15 -- common/autotest_common.sh@950 -- # wait 474397 00:14:01.899 [2024-05-15 06:51:15.891419] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:01.899 06:51:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:01.899 06:51:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:01.899 06:51:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:01.899 06:51:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.899 06:51:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:01.899 06:51:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.899 06:51:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.899 06:51:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.804 06:51:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:03.804 00:14:03.804 real 0m7.410s 00:14:03.804 user 0m23.008s 00:14:03.804 sys 0m1.366s 00:14:03.804 06:51:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.804 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:14:03.804 ************************************ 00:14:03.804 END TEST nvmf_host_management 00:14:03.804 ************************************ 00:14:03.804 06:51:17 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:03.804 00:14:03.804 real 0m10.024s 00:14:03.804 user 0m23.942s 00:14:03.804 sys 0m3.086s 00:14:03.804 06:51:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.804 06:51:17 -- common/autotest_common.sh@10 -- # set +x 00:14:03.804 ************************************ 00:14:03.804 END TEST nvmf_host_management 00:14:03.804 ************************************ 00:14:03.804 06:51:18 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:03.804 06:51:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:03.804 06:51:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:03.804 06:51:18 -- common/autotest_common.sh@10 -- # set +x 00:14:03.804 ************************************ 00:14:03.804 START TEST nvmf_lvol 00:14:03.804 ************************************ 00:14:03.804 06:51:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:04.062 * Looking for test storage... 
00:14:04.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.062 06:51:18 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.062 06:51:18 -- nvmf/common.sh@7 -- # uname -s 00:14:04.062 06:51:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.062 06:51:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.062 06:51:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.062 06:51:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.062 06:51:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.062 06:51:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.062 06:51:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.062 06:51:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.062 06:51:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.062 06:51:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.062 06:51:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:04.062 06:51:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:04.062 06:51:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.062 06:51:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.062 06:51:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.062 06:51:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.062 06:51:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.062 06:51:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.062 06:51:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.062 06:51:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.062 06:51:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.062 06:51:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.062 06:51:18 -- paths/export.sh@5 -- # export PATH 00:14:04.062 06:51:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.062 06:51:18 -- nvmf/common.sh@46 -- # : 0 00:14:04.062 06:51:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:04.062 06:51:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:04.062 06:51:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:04.062 06:51:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.062 06:51:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.062 06:51:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:04.062 06:51:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:04.062 06:51:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:04.062 06:51:18 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.062 06:51:18 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.062 06:51:18 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:04.062 06:51:18 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:04.062 06:51:18 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:04.062 06:51:18 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:04.062 06:51:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:04.062 06:51:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.062 06:51:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:04.062 06:51:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:04.062 06:51:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:04.062 06:51:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.062 06:51:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.062 06:51:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.062 06:51:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:04.062 06:51:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:04.062 06:51:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:04.062 06:51:18 -- common/autotest_common.sh@10 -- # set +x 00:14:06.593 06:51:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:06.594 06:51:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:06.594 06:51:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:06.594 06:51:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:06.594 06:51:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:06.594 06:51:20 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:14:06.594 06:51:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:06.594 06:51:20 -- nvmf/common.sh@294 -- # net_devs=() 00:14:06.594 06:51:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:06.594 06:51:20 -- nvmf/common.sh@295 -- # e810=() 00:14:06.594 06:51:20 -- nvmf/common.sh@295 -- # local -ga e810 00:14:06.594 06:51:20 -- nvmf/common.sh@296 -- # x722=() 00:14:06.594 06:51:20 -- nvmf/common.sh@296 -- # local -ga x722 00:14:06.594 06:51:20 -- nvmf/common.sh@297 -- # mlx=() 00:14:06.594 06:51:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:06.594 06:51:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.594 06:51:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:06.594 06:51:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:06.594 06:51:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:06.594 06:51:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.594 06:51:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:06.594 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:06.594 06:51:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.594 06:51:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:06.594 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:06.594 06:51:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:06.594 06:51:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.594 06:51:20 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.594 06:51:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.594 06:51:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.594 06:51:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:06.594 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:06.594 06:51:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.594 06:51:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.594 06:51:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.594 06:51:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.594 06:51:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.594 06:51:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:06.594 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:06.594 06:51:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.594 06:51:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:06.594 06:51:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:06.594 06:51:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:06.594 06:51:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.594 06:51:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.594 06:51:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.594 06:51:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:06.594 06:51:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.594 06:51:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.594 06:51:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:06.594 06:51:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.594 06:51:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.594 06:51:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:06.594 06:51:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:06.594 06:51:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.594 06:51:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.594 06:51:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.594 06:51:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.594 06:51:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:06.594 06:51:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.594 06:51:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.594 06:51:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.594 06:51:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:06.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:14:06.594 00:14:06.594 --- 10.0.0.2 ping statistics --- 00:14:06.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.594 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:14:06.594 06:51:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:14:06.594 00:14:06.594 --- 10.0.0.1 ping statistics --- 00:14:06.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.594 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:14:06.594 06:51:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.594 06:51:20 -- nvmf/common.sh@410 -- # return 0 00:14:06.594 06:51:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.594 06:51:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.594 06:51:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.594 06:51:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.594 06:51:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.594 06:51:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.594 06:51:20 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:06.594 06:51:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.594 06:51:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:06.594 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:14:06.594 06:51:20 -- nvmf/common.sh@469 -- # nvmfpid=477386 00:14:06.594 06:51:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:06.594 06:51:20 -- nvmf/common.sh@470 -- # waitforlisten 477386 00:14:06.594 06:51:20 -- common/autotest_common.sh@819 -- # '[' -z 477386 ']' 00:14:06.594 06:51:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.594 06:51:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:06.594 06:51:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.594 06:51:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:06.594 06:51:20 -- common/autotest_common.sh@10 -- # set +x 00:14:06.594 [2024-05-15 06:51:20.700136] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:06.594 [2024-05-15 06:51:20.700215] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.594 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.594 [2024-05-15 06:51:20.784679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.852 [2024-05-15 06:51:20.899386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.852 [2024-05-15 06:51:20.899546] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.852 [2024-05-15 06:51:20.899565] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.852 [2024-05-15 06:51:20.899586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
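The target above is launched inside a private network namespace so that initiator and target traffic crosses the two physical E810 ports (NET_TYPE=phy) instead of loopback. A condensed sketch of the plumbing nvmf_tcp_init performed in the trace above; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the values this run used:

    # Target port moves into its own namespace; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address the initiator side (root namespace)...
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # ...and the target side, inside the namespace.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Every target-side command is then wrapped in the namespace, e.g.:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7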
00:14:06.852 [2024-05-15 06:51:20.899643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.852 [2024-05-15 06:51:20.899697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.852 [2024-05-15 06:51:20.899700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.417 06:51:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.417 06:51:21 -- common/autotest_common.sh@852 -- # return 0 00:14:07.417 06:51:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.417 06:51:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:07.417 06:51:21 -- common/autotest_common.sh@10 -- # set +x 00:14:07.675 06:51:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.675 06:51:21 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:07.675 [2024-05-15 06:51:21.879754] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.675 06:51:21 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.933 06:51:22 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:07.933 06:51:22 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.498 06:51:22 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:08.498 06:51:22 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:08.498 06:51:22 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:08.756 06:51:22 -- target/nvmf_lvol.sh@29 -- # lvs=e8939790-9cb7-423a-8d9e-ba9f00ce38c5 00:14:08.756 06:51:22 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e8939790-9cb7-423a-8d9e-ba9f00ce38c5 lvol 20 00:14:09.014 06:51:23 -- target/nvmf_lvol.sh@32 -- # lvol=042722fa-aef6-4489-b821-38a7846e26c8 00:14:09.014 06:51:23 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:09.271 06:51:23 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 042722fa-aef6-4489-b821-38a7846e26c8 00:14:09.528 06:51:23 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:09.786 [2024-05-15 06:51:23.887810] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.786 06:51:23 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:10.042 06:51:24 -- target/nvmf_lvol.sh@42 -- # perf_pid=477837 00:14:10.043 06:51:24 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:10.043 06:51:24 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:10.043 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.975 
06:51:25 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 042722fa-aef6-4489-b821-38a7846e26c8 MY_SNAPSHOT 00:14:11.233 06:51:25 -- target/nvmf_lvol.sh@47 -- # snapshot=c36b6ae2-e7eb-4bc5-8a67-3f3db4a3e7c1 00:14:11.233 06:51:25 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 042722fa-aef6-4489-b821-38a7846e26c8 30 00:14:11.491 06:51:25 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c36b6ae2-e7eb-4bc5-8a67-3f3db4a3e7c1 MY_CLONE 00:14:11.750 06:51:25 -- target/nvmf_lvol.sh@49 -- # clone=351740fa-df53-419f-89af-0972acb1f6ce 00:14:11.750 06:51:25 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 351740fa-df53-419f-89af-0972acb1f6ce 00:14:12.008 06:51:26 -- target/nvmf_lvol.sh@53 -- # wait 477837 00:14:21.980 Initializing NVMe Controllers 00:14:21.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:21.980 Controller IO queue size 128, less than required. 00:14:21.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:21.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:21.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:21.980 Initialization complete. Launching workers. 00:14:21.980 ======================================================== 00:14:21.980 Latency(us) 00:14:21.980 Device Information : IOPS MiB/s Average min max 00:14:21.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11163.07 43.61 11467.39 431.10 61390.94 00:14:21.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10416.18 40.69 12294.70 1600.36 61664.60 00:14:21.980 ======================================================== 00:14:21.980 Total : 21579.25 84.29 11866.72 431.10 61664.60 00:14:21.980 00:14:21.980 06:51:34 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:21.980 06:51:34 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 042722fa-aef6-4489-b821-38a7846e26c8 00:14:21.980 06:51:35 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8939790-9cb7-423a-8d9e-ba9f00ce38c5 00:14:21.980 06:51:35 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:21.980 06:51:35 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:21.980 06:51:35 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:21.980 06:51:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:21.980 06:51:35 -- nvmf/common.sh@116 -- # sync 00:14:21.980 06:51:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:21.980 06:51:35 -- nvmf/common.sh@119 -- # set +e 00:14:21.980 06:51:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:21.980 06:51:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:21.980 rmmod nvme_tcp 00:14:21.980 rmmod nvme_fabrics 00:14:21.980 rmmod nvme_keyring 00:14:21.980 06:51:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:21.980 06:51:35 -- nvmf/common.sh@123 -- # set -e 00:14:21.980 06:51:35 -- nvmf/common.sh@124 -- # return 0 00:14:21.980 06:51:35 -- nvmf/common.sh@477 -- # '[' -n 477386 ']' 
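Stripped of the xtrace noise, the nvmf_lvol body that just ran is a short RPC script: two malloc bdevs striped into a raid0, an lvstore on top of the raid, a 20M lvol exported over NVMe/TCP, and then snapshot/resize/clone/inflate performed live while spdk_nvme_perf writes to the namespace. A minimal sketch using the same rpc.py calls (run from the spdk checkout; the UUID captures are shorthand for what the test script does with its `--` pipelines):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                      # Malloc0
    $rpc bdev_malloc_create 64 512                      # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # prints the lvol bdev UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # ...with spdk_nvme_perf running against the namespace the whole time:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"

The perf table above is only there to show that I/O keeps completing through each of those operations; the absolute IOPS figure is whatever this E810/TCP setup sustains at the configured queue depth.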
00:14:21.980 06:51:35 -- nvmf/common.sh@478 -- # killprocess 477386 00:14:21.980 06:51:35 -- common/autotest_common.sh@926 -- # '[' -z 477386 ']' 00:14:21.980 06:51:35 -- common/autotest_common.sh@930 -- # kill -0 477386 00:14:21.980 06:51:35 -- common/autotest_common.sh@931 -- # uname 00:14:21.980 06:51:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.980 06:51:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 477386 00:14:21.980 06:51:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:21.980 06:51:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:21.980 06:51:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 477386' 00:14:21.980 killing process with pid 477386 00:14:21.980 06:51:35 -- common/autotest_common.sh@945 -- # kill 477386 00:14:21.981 06:51:35 -- common/autotest_common.sh@950 -- # wait 477386 00:14:21.981 06:51:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:21.981 06:51:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:21.981 06:51:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:21.981 06:51:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.981 06:51:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:21.981 06:51:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.981 06:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.981 06:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.885 06:51:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:23.885 00:14:23.885 real 0m19.813s 00:14:23.885 user 1m2.198s 00:14:23.885 sys 0m7.354s 00:14:23.885 06:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.885 06:51:37 -- common/autotest_common.sh@10 -- # set +x 00:14:23.885 ************************************ 00:14:23.885 END TEST nvmf_lvol 00:14:23.885 ************************************ 00:14:23.885 06:51:37 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.885 06:51:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:23.885 06:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.885 06:51:37 -- common/autotest_common.sh@10 -- # set +x 00:14:23.885 ************************************ 00:14:23.885 START TEST nvmf_lvs_grow 00:14:23.885 ************************************ 00:14:23.885 06:51:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:23.885 * Looking for test storage... 
00:14:23.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.885 06:51:37 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.886 06:51:37 -- nvmf/common.sh@7 -- # uname -s 00:14:23.886 06:51:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.886 06:51:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.886 06:51:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.886 06:51:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.886 06:51:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.886 06:51:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.886 06:51:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.886 06:51:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.886 06:51:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.886 06:51:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.886 06:51:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.886 06:51:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.886 06:51:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.886 06:51:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.886 06:51:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.886 06:51:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.886 06:51:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.886 06:51:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.886 06:51:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.886 06:51:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.886 06:51:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.886 06:51:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.886 06:51:37 -- paths/export.sh@5 -- # export PATH 00:14:23.886 06:51:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.886 06:51:37 -- nvmf/common.sh@46 -- # : 0 00:14:23.886 06:51:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:23.886 06:51:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:23.886 06:51:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:23.886 06:51:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.886 06:51:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.886 06:51:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:23.886 06:51:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:23.886 06:51:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:23.886 06:51:37 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.886 06:51:37 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:23.886 06:51:37 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:23.886 06:51:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:23.886 06:51:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.886 06:51:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:23.886 06:51:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:23.886 06:51:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:23.886 06:51:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.886 06:51:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.886 06:51:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.886 06:51:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:23.886 06:51:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:23.886 06:51:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:23.886 06:51:37 -- common/autotest_common.sh@10 -- # set +x 00:14:26.414 06:51:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:26.414 06:51:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:26.414 06:51:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:26.414 06:51:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:26.414 06:51:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:26.414 06:51:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:26.414 06:51:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:26.414 06:51:40 -- nvmf/common.sh@294 -- # net_devs=() 00:14:26.414 06:51:40 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:14:26.414 06:51:40 -- nvmf/common.sh@295 -- # e810=() 00:14:26.414 06:51:40 -- nvmf/common.sh@295 -- # local -ga e810 00:14:26.414 06:51:40 -- nvmf/common.sh@296 -- # x722=() 00:14:26.414 06:51:40 -- nvmf/common.sh@296 -- # local -ga x722 00:14:26.414 06:51:40 -- nvmf/common.sh@297 -- # mlx=() 00:14:26.414 06:51:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:26.414 06:51:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.414 06:51:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.414 06:51:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.414 06:51:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.414 06:51:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.414 06:51:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.415 06:51:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.415 06:51:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.415 06:51:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.415 06:51:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.415 06:51:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.415 06:51:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:26.415 06:51:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:26.415 06:51:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:26.415 06:51:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:26.415 06:51:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:26.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:26.415 06:51:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:26.415 06:51:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:26.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:26.415 06:51:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:26.415 06:51:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:26.415 06:51:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.415 06:51:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:26.415 06:51:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.415 06:51:40 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:26.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:26.415 06:51:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.415 06:51:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:26.415 06:51:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.415 06:51:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:26.415 06:51:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.415 06:51:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:26.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:26.415 06:51:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.415 06:51:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:26.415 06:51:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:26.415 06:51:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:26.415 06:51:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.415 06:51:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.415 06:51:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.415 06:51:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:26.415 06:51:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.415 06:51:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.415 06:51:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:26.415 06:51:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.415 06:51:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.415 06:51:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:26.415 06:51:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:26.415 06:51:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.415 06:51:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.415 06:51:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.415 06:51:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.415 06:51:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:26.415 06:51:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.415 06:51:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.415 06:51:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.415 06:51:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:26.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:14:26.415 00:14:26.415 --- 10.0.0.2 ping statistics --- 00:14:26.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.415 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:14:26.415 06:51:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:14:26.415 00:14:26.415 --- 10.0.0.1 ping statistics --- 00:14:26.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.415 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:14:26.415 06:51:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.415 06:51:40 -- nvmf/common.sh@410 -- # return 0 00:14:26.415 06:51:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:26.415 06:51:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.415 06:51:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:26.415 06:51:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.415 06:51:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:26.415 06:51:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:26.415 06:51:40 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:26.415 06:51:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:26.415 06:51:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:26.415 06:51:40 -- common/autotest_common.sh@10 -- # set +x 00:14:26.415 06:51:40 -- nvmf/common.sh@469 -- # nvmfpid=481442 00:14:26.415 06:51:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:26.415 06:51:40 -- nvmf/common.sh@470 -- # waitforlisten 481442 00:14:26.415 06:51:40 -- common/autotest_common.sh@819 -- # '[' -z 481442 ']' 00:14:26.415 06:51:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.415 06:51:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:26.415 06:51:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.415 06:51:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:26.415 06:51:40 -- common/autotest_common.sh@10 -- # set +x 00:14:26.415 [2024-05-15 06:51:40.540112] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:26.415 [2024-05-15 06:51:40.540188] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.415 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.415 [2024-05-15 06:51:40.620871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.673 [2024-05-15 06:51:40.737556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:26.673 [2024-05-15 06:51:40.737722] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.673 [2024-05-15 06:51:40.737741] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.673 [2024-05-15 06:51:40.737756] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
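The lvs_grow suite that follows checks that a logical volume store can be grown in place once its backing device gets larger. The clean-path variant builds the lvstore on an AIO bdev backed by a 200M file, enlarges the file to 400M, rescans the AIO bdev, and verifies that the cluster count only changes when bdev_lvol_grow_lvstore is called, with I/O in flight. Condensed to its RPC skeleton (paths shortened for readability; the run below uses test/nvmf/target/aio_bdev, and the cluster counts 49/99 are the values this run reports):

    rpc=scripts/rpc.py
    truncate -s 200M /path/to/aio_file
    $rpc bdev_aio_create /path/to/aio_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

    truncate -s 400M /path/to/aio_file        # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev             # ...and let the AIO bdev see the new size

    # total_data_clusters still reads 49 until the lvstore itself is grown:
    $rpc bdev_lvol_get_lvstores -u "$lvs"
    $rpc bdev_lvol_grow_lvstore -u "$lvs"     # afterwards it reports 99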
00:14:26.673 [2024-05-15 06:51:40.737787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.608 06:51:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:27.608 06:51:41 -- common/autotest_common.sh@852 -- # return 0 00:14:27.608 06:51:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:27.608 06:51:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:27.608 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:14:27.608 06:51:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:27.608 [2024-05-15 06:51:41.718204] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:27.608 06:51:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:27.608 06:51:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:27.608 06:51:41 -- common/autotest_common.sh@10 -- # set +x 00:14:27.608 ************************************ 00:14:27.608 START TEST lvs_grow_clean 00:14:27.608 ************************************ 00:14:27.608 06:51:41 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:27.608 06:51:41 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:27.866 06:51:42 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:27.866 06:51:42 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:28.124 06:51:42 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:28.124 06:51:42 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:28.124 06:51:42 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:28.382 06:51:42 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:28.382 06:51:42 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:28.382 06:51:42 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa24cd1d-3819-4266-b234-98ff18e770cd lvol 150 00:14:28.640 06:51:42 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9a5b9dc3-9335-4968-a980-ac3b5698777c 00:14:28.640 06:51:42 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:28.640 06:51:42 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:28.898 [2024-05-15 06:51:42.952036] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:28.898 [2024-05-15 06:51:42.952133] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:28.898 true 00:14:28.898 06:51:42 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:28.898 06:51:42 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:29.183 06:51:43 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:29.183 06:51:43 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:29.442 06:51:43 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a5b9dc3-9335-4968-a980-ac3b5698777c 00:14:29.700 06:51:43 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:29.957 [2024-05-15 06:51:43.951149] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.957 06:51:43 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:30.215 06:51:44 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=482020 00:14:30.215 06:51:44 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:30.215 06:51:44 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:30.215 06:51:44 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 482020 /var/tmp/bdevperf.sock 00:14:30.215 06:51:44 -- common/autotest_common.sh@819 -- # '[' -z 482020 ']' 00:14:30.215 06:51:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.215 06:51:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.215 06:51:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:30.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.215 06:51:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.215 06:51:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.215 [2024-05-15 06:51:44.242857] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
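On the initiator side this test uses bdevperf rather than spdk_nvme_perf: started with -z it idles until a bdev is configured over its own RPC socket, so the NVMe-oF controller is attached and the workload kicked off by hand. A sketch mirroring the three commands visible in the trace (socket path, core mask and bdev name as used in this run):

    # start bdevperf waiting for RPC configuration (-z)
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

    # attach the NVMe/TCP namespace; it shows up as bdev Nvme0n1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # run the configured workload
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests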
00:14:30.215 [2024-05-15 06:51:44.242955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482020 ] 00:14:30.215 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.215 [2024-05-15 06:51:44.315015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.215 [2024-05-15 06:51:44.429925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.148 06:51:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.148 06:51:45 -- common/autotest_common.sh@852 -- # return 0 00:14:31.148 06:51:45 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:31.406 Nvme0n1 00:14:31.406 06:51:45 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:31.664 [ 00:14:31.664 { 00:14:31.664 "name": "Nvme0n1", 00:14:31.664 "aliases": [ 00:14:31.664 "9a5b9dc3-9335-4968-a980-ac3b5698777c" 00:14:31.664 ], 00:14:31.664 "product_name": "NVMe disk", 00:14:31.664 "block_size": 4096, 00:14:31.664 "num_blocks": 38912, 00:14:31.664 "uuid": "9a5b9dc3-9335-4968-a980-ac3b5698777c", 00:14:31.664 "assigned_rate_limits": { 00:14:31.664 "rw_ios_per_sec": 0, 00:14:31.664 "rw_mbytes_per_sec": 0, 00:14:31.664 "r_mbytes_per_sec": 0, 00:14:31.664 "w_mbytes_per_sec": 0 00:14:31.664 }, 00:14:31.664 "claimed": false, 00:14:31.664 "zoned": false, 00:14:31.664 "supported_io_types": { 00:14:31.664 "read": true, 00:14:31.664 "write": true, 00:14:31.664 "unmap": true, 00:14:31.664 "write_zeroes": true, 00:14:31.664 "flush": true, 00:14:31.664 "reset": true, 00:14:31.664 "compare": true, 00:14:31.664 "compare_and_write": true, 00:14:31.664 "abort": true, 00:14:31.664 "nvme_admin": true, 00:14:31.664 "nvme_io": true 00:14:31.664 }, 00:14:31.664 "driver_specific": { 00:14:31.664 "nvme": [ 00:14:31.664 { 00:14:31.664 "trid": { 00:14:31.664 "trtype": "TCP", 00:14:31.664 "adrfam": "IPv4", 00:14:31.664 "traddr": "10.0.0.2", 00:14:31.664 "trsvcid": "4420", 00:14:31.664 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:31.664 }, 00:14:31.665 "ctrlr_data": { 00:14:31.665 "cntlid": 1, 00:14:31.665 "vendor_id": "0x8086", 00:14:31.665 "model_number": "SPDK bdev Controller", 00:14:31.665 "serial_number": "SPDK0", 00:14:31.665 "firmware_revision": "24.01.1", 00:14:31.665 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:31.665 "oacs": { 00:14:31.665 "security": 0, 00:14:31.665 "format": 0, 00:14:31.665 "firmware": 0, 00:14:31.665 "ns_manage": 0 00:14:31.665 }, 00:14:31.665 "multi_ctrlr": true, 00:14:31.665 "ana_reporting": false 00:14:31.665 }, 00:14:31.665 "vs": { 00:14:31.665 "nvme_version": "1.3" 00:14:31.665 }, 00:14:31.665 "ns_data": { 00:14:31.665 "id": 1, 00:14:31.665 "can_share": true 00:14:31.665 } 00:14:31.665 } 00:14:31.665 ], 00:14:31.665 "mp_policy": "active_passive" 00:14:31.665 } 00:14:31.665 } 00:14:31.665 ] 00:14:31.665 06:51:45 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=482170 00:14:31.665 06:51:45 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:31.665 06:51:45 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:31.923 Running I/O 
for 10 seconds... 00:14:32.855 Latency(us) 00:14:32.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.855 Nvme0n1 : 1.00 14466.00 56.51 0.00 0.00 0.00 0.00 0.00 00:14:32.855 =================================================================================================================== 00:14:32.855 Total : 14466.00 56.51 0.00 0.00 0.00 0.00 0.00 00:14:32.855 00:14:33.790 06:51:47 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:33.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.790 Nvme0n1 : 2.00 14657.00 57.25 0.00 0.00 0.00 0.00 0.00 00:14:33.790 =================================================================================================================== 00:14:33.790 Total : 14657.00 57.25 0.00 0.00 0.00 0.00 0.00 00:14:33.790 00:14:34.049 true 00:14:34.049 06:51:48 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:34.049 06:51:48 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:34.307 06:51:48 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:34.307 06:51:48 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:34.307 06:51:48 -- target/nvmf_lvs_grow.sh@65 -- # wait 482170 00:14:34.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.873 Nvme0n1 : 3.00 14741.67 57.58 0.00 0.00 0.00 0.00 0.00 00:14:34.873 =================================================================================================================== 00:14:34.873 Total : 14741.67 57.58 0.00 0.00 0.00 0.00 0.00 00:14:34.873 00:14:35.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.808 Nvme0n1 : 4.00 14864.25 58.06 0.00 0.00 0.00 0.00 0.00 00:14:35.808 =================================================================================================================== 00:14:35.808 Total : 14864.25 58.06 0.00 0.00 0.00 0.00 0.00 00:14:35.808 00:14:37.181 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.181 Nvme0n1 : 5.00 14963.40 58.45 0.00 0.00 0.00 0.00 0.00 00:14:37.181 =================================================================================================================== 00:14:37.181 Total : 14963.40 58.45 0.00 0.00 0.00 0.00 0.00 00:14:37.181 00:14:37.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.747 Nvme0n1 : 6.00 15008.17 58.63 0.00 0.00 0.00 0.00 0.00 00:14:37.747 =================================================================================================================== 00:14:37.747 Total : 15008.17 58.63 0.00 0.00 0.00 0.00 0.00 00:14:37.747 00:14:39.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.130 Nvme0n1 : 7.00 15040.14 58.75 0.00 0.00 0.00 0.00 0.00 00:14:39.130 =================================================================================================================== 00:14:39.130 Total : 15040.14 58.75 0.00 0.00 0.00 0.00 0.00 00:14:39.130 00:14:40.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.064 Nvme0n1 : 8.00 15073.88 58.88 0.00 0.00 0.00 0.00 0.00 00:14:40.064 
=================================================================================================================== 00:14:40.064 Total : 15073.88 58.88 0.00 0.00 0.00 0.00 0.00 00:14:40.064 00:14:40.999 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.999 Nvme0n1 : 9.00 15097.00 58.97 0.00 0.00 0.00 0.00 0.00 00:14:40.999 =================================================================================================================== 00:14:40.999 Total : 15097.00 58.97 0.00 0.00 0.00 0.00 0.00 00:14:40.999 00:14:41.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.934 Nvme0n1 : 10.00 15120.00 59.06 0.00 0.00 0.00 0.00 0.00 00:14:41.934 =================================================================================================================== 00:14:41.934 Total : 15120.00 59.06 0.00 0.00 0.00 0.00 0.00 00:14:41.934 00:14:41.934 00:14:41.934 Latency(us) 00:14:41.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.934 Nvme0n1 : 10.01 15123.13 59.07 0.00 0.00 8457.70 5024.43 15146.10 00:14:41.934 =================================================================================================================== 00:14:41.934 Total : 15123.13 59.07 0.00 0.00 8457.70 5024.43 15146.10 00:14:41.934 0 00:14:41.934 06:51:56 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 482020 00:14:41.934 06:51:56 -- common/autotest_common.sh@926 -- # '[' -z 482020 ']' 00:14:41.934 06:51:56 -- common/autotest_common.sh@930 -- # kill -0 482020 00:14:41.934 06:51:56 -- common/autotest_common.sh@931 -- # uname 00:14:41.934 06:51:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.934 06:51:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 482020 00:14:41.934 06:51:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:41.934 06:51:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:41.934 06:51:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 482020' 00:14:41.934 killing process with pid 482020 00:14:41.934 06:51:56 -- common/autotest_common.sh@945 -- # kill 482020 00:14:41.934 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.934 00:14:41.934 Latency(us) 00:14:41.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.934 =================================================================================================================== 00:14:41.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:41.934 06:51:56 -- common/autotest_common.sh@950 -- # wait 482020 00:14:42.192 06:51:56 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:42.450 06:51:56 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:42.450 06:51:56 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:42.709 06:51:56 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:42.709 06:51:56 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:42.709 06:51:56 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:43.002 [2024-05-15 06:51:57.067729] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: 
*NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:43.002 06:51:57 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:43.002 06:51:57 -- common/autotest_common.sh@640 -- # local es=0 00:14:43.002 06:51:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:43.002 06:51:57 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.002 06:51:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:43.002 06:51:57 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.002 06:51:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:43.002 06:51:57 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.002 06:51:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:43.002 06:51:57 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.002 06:51:57 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:43.002 06:51:57 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:43.261 request: 00:14:43.261 { 00:14:43.261 "uuid": "fa24cd1d-3819-4266-b234-98ff18e770cd", 00:14:43.261 "method": "bdev_lvol_get_lvstores", 00:14:43.261 "req_id": 1 00:14:43.261 } 00:14:43.261 Got JSON-RPC error response 00:14:43.261 response: 00:14:43.261 { 00:14:43.261 "code": -19, 00:14:43.261 "message": "No such device" 00:14:43.261 } 00:14:43.261 06:51:57 -- common/autotest_common.sh@643 -- # es=1 00:14:43.261 06:51:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:43.261 06:51:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:43.261 06:51:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:43.261 06:51:57 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:43.519 aio_bdev 00:14:43.519 06:51:57 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9a5b9dc3-9335-4968-a980-ac3b5698777c 00:14:43.519 06:51:57 -- common/autotest_common.sh@887 -- # local bdev_name=9a5b9dc3-9335-4968-a980-ac3b5698777c 00:14:43.519 06:51:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:43.519 06:51:57 -- common/autotest_common.sh@889 -- # local i 00:14:43.519 06:51:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:43.519 06:51:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:43.519 06:51:57 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:43.777 06:51:57 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9a5b9dc3-9335-4968-a980-ac3b5698777c -t 2000 00:14:44.035 [ 00:14:44.035 { 00:14:44.035 "name": "9a5b9dc3-9335-4968-a980-ac3b5698777c", 00:14:44.035 "aliases": [ 00:14:44.035 "lvs/lvol" 00:14:44.035 ], 00:14:44.035 
"product_name": "Logical Volume", 00:14:44.035 "block_size": 4096, 00:14:44.035 "num_blocks": 38912, 00:14:44.035 "uuid": "9a5b9dc3-9335-4968-a980-ac3b5698777c", 00:14:44.035 "assigned_rate_limits": { 00:14:44.035 "rw_ios_per_sec": 0, 00:14:44.035 "rw_mbytes_per_sec": 0, 00:14:44.035 "r_mbytes_per_sec": 0, 00:14:44.035 "w_mbytes_per_sec": 0 00:14:44.035 }, 00:14:44.035 "claimed": false, 00:14:44.035 "zoned": false, 00:14:44.035 "supported_io_types": { 00:14:44.035 "read": true, 00:14:44.035 "write": true, 00:14:44.035 "unmap": true, 00:14:44.035 "write_zeroes": true, 00:14:44.035 "flush": false, 00:14:44.035 "reset": true, 00:14:44.035 "compare": false, 00:14:44.035 "compare_and_write": false, 00:14:44.035 "abort": false, 00:14:44.035 "nvme_admin": false, 00:14:44.035 "nvme_io": false 00:14:44.035 }, 00:14:44.035 "driver_specific": { 00:14:44.035 "lvol": { 00:14:44.035 "lvol_store_uuid": "fa24cd1d-3819-4266-b234-98ff18e770cd", 00:14:44.035 "base_bdev": "aio_bdev", 00:14:44.035 "thin_provision": false, 00:14:44.035 "snapshot": false, 00:14:44.035 "clone": false, 00:14:44.035 "esnap_clone": false 00:14:44.035 } 00:14:44.035 } 00:14:44.035 } 00:14:44.035 ] 00:14:44.036 06:51:58 -- common/autotest_common.sh@895 -- # return 0 00:14:44.036 06:51:58 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:44.036 06:51:58 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:44.294 06:51:58 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:44.294 06:51:58 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:44.294 06:51:58 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:44.552 06:51:58 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:44.552 06:51:58 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9a5b9dc3-9335-4968-a980-ac3b5698777c 00:14:44.812 06:51:58 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa24cd1d-3819-4266-b234-98ff18e770cd 00:14:45.070 06:51:59 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:45.070 06:51:59 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.328 00:14:45.328 real 0m17.587s 00:14:45.328 user 0m17.234s 00:14:45.328 sys 0m1.806s 00:14:45.328 06:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:45.328 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 ************************************ 00:14:45.328 END TEST lvs_grow_clean 00:14:45.328 ************************************ 00:14:45.328 06:51:59 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:45.328 06:51:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:45.328 06:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:45.328 06:51:59 -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 ************************************ 00:14:45.328 START TEST lvs_grow_dirty 00:14:45.328 ************************************ 00:14:45.329 06:51:59 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:45.329 06:51:59 -- 
target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:45.329 06:51:59 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:45.587 06:51:59 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:45.587 06:51:59 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:45.845 06:51:59 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:14:45.845 06:51:59 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:14:45.845 06:51:59 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:46.103 06:52:00 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:46.103 06:52:00 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:46.103 06:52:00 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 lvol 150 00:14:46.361 06:52:00 -- target/nvmf_lvs_grow.sh@33 -- # lvol=cda55e0e-c430-4926-bf9d-b0c1827726b4 00:14:46.361 06:52:00 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.361 06:52:00 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:46.361 [2024-05-15 06:52:00.577162] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:46.361 [2024-05-15 06:52:00.577292] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:46.361 true 00:14:46.361 06:52:00 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:14:46.361 06:52:00 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:46.618 06:52:00 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:46.618 06:52:00 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:47.183 06:52:01 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cda55e0e-c430-4926-bf9d-b0c1827726b4 
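Condensed, the lvs_grow setup traced above is the following shell recipe (a minimal sketch: the sizes, cluster geometry, bdev/lvstore/lvol names and the grow step match this run, while the backing-file path and the relative rpc.py path are illustrative):

    truncate -s 200M /tmp/aio_file                     # backing file for the AIO bdev
    ./scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume
    truncate -s 400M /tmp/aio_file                     # grow the file underneath...
    ./scripts/rpc.py bdev_aio_rescan aio_bdev          # ...and let the bdev pick it up
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"

The TCP listeners added on the next lines make that namespace reachable at 10.0.0.2:4420; bdev_lvol_grow_lvstore, issued later while bdevperf is still writing, is what converts the extra 200M into new data clusters (total_data_clusters goes from 49 to 99).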
00:14:47.183 06:52:01 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:47.441 06:52:01 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:47.699 06:52:01 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=484138 00:14:47.699 06:52:01 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:47.699 06:52:01 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:47.699 06:52:01 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 484138 /var/tmp/bdevperf.sock 00:14:47.699 06:52:01 -- common/autotest_common.sh@819 -- # '[' -z 484138 ']' 00:14:47.699 06:52:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.699 06:52:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:47.699 06:52:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.699 06:52:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:47.699 06:52:01 -- common/autotest_common.sh@10 -- # set +x 00:14:47.699 [2024-05-15 06:52:01.893616] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:47.699 [2024-05-15 06:52:01.893697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484138 ] 00:14:47.699 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.957 [2024-05-15 06:52:01.970022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.957 [2024-05-15 06:52:02.083771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.889 06:52:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:48.889 06:52:02 -- common/autotest_common.sh@852 -- # return 0 00:14:48.889 06:52:02 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:49.145 Nvme0n1 00:14:49.145 06:52:03 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:49.402 [ 00:14:49.402 { 00:14:49.402 "name": "Nvme0n1", 00:14:49.402 "aliases": [ 00:14:49.402 "cda55e0e-c430-4926-bf9d-b0c1827726b4" 00:14:49.402 ], 00:14:49.402 "product_name": "NVMe disk", 00:14:49.402 "block_size": 4096, 00:14:49.402 "num_blocks": 38912, 00:14:49.402 "uuid": "cda55e0e-c430-4926-bf9d-b0c1827726b4", 00:14:49.402 "assigned_rate_limits": { 00:14:49.402 "rw_ios_per_sec": 0, 00:14:49.402 "rw_mbytes_per_sec": 0, 00:14:49.402 "r_mbytes_per_sec": 0, 00:14:49.402 "w_mbytes_per_sec": 0 00:14:49.402 }, 00:14:49.402 "claimed": false, 00:14:49.402 "zoned": false, 00:14:49.402 "supported_io_types": { 00:14:49.402 "read": true, 00:14:49.402 "write": true, 00:14:49.402 "unmap": true, 00:14:49.402 
"write_zeroes": true, 00:14:49.402 "flush": true, 00:14:49.402 "reset": true, 00:14:49.402 "compare": true, 00:14:49.402 "compare_and_write": true, 00:14:49.402 "abort": true, 00:14:49.402 "nvme_admin": true, 00:14:49.402 "nvme_io": true 00:14:49.402 }, 00:14:49.402 "driver_specific": { 00:14:49.402 "nvme": [ 00:14:49.402 { 00:14:49.402 "trid": { 00:14:49.402 "trtype": "TCP", 00:14:49.402 "adrfam": "IPv4", 00:14:49.402 "traddr": "10.0.0.2", 00:14:49.402 "trsvcid": "4420", 00:14:49.402 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:49.402 }, 00:14:49.402 "ctrlr_data": { 00:14:49.402 "cntlid": 1, 00:14:49.402 "vendor_id": "0x8086", 00:14:49.402 "model_number": "SPDK bdev Controller", 00:14:49.402 "serial_number": "SPDK0", 00:14:49.402 "firmware_revision": "24.01.1", 00:14:49.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.402 "oacs": { 00:14:49.402 "security": 0, 00:14:49.402 "format": 0, 00:14:49.402 "firmware": 0, 00:14:49.402 "ns_manage": 0 00:14:49.402 }, 00:14:49.402 "multi_ctrlr": true, 00:14:49.402 "ana_reporting": false 00:14:49.402 }, 00:14:49.402 "vs": { 00:14:49.402 "nvme_version": "1.3" 00:14:49.402 }, 00:14:49.402 "ns_data": { 00:14:49.402 "id": 1, 00:14:49.402 "can_share": true 00:14:49.402 } 00:14:49.402 } 00:14:49.402 ], 00:14:49.402 "mp_policy": "active_passive" 00:14:49.402 } 00:14:49.402 } 00:14:49.402 ] 00:14:49.402 06:52:03 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=484405 00:14:49.402 06:52:03 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.402 06:52:03 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:49.402 Running I/O for 10 seconds... 00:14:50.336 Latency(us) 00:14:50.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.337 Nvme0n1 : 1.00 14147.00 55.26 0.00 0.00 0.00 0.00 0.00 00:14:50.337 =================================================================================================================== 00:14:50.337 Total : 14147.00 55.26 0.00 0.00 0.00 0.00 0.00 00:14:50.337 00:14:51.268 06:52:05 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:14:51.529 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.529 Nvme0n1 : 2.00 14305.50 55.88 0.00 0.00 0.00 0.00 0.00 00:14:51.529 =================================================================================================================== 00:14:51.529 Total : 14305.50 55.88 0.00 0.00 0.00 0.00 0.00 00:14:51.529 00:14:51.529 true 00:14:51.529 06:52:05 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:14:51.529 06:52:05 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:51.789 06:52:05 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:51.789 06:52:05 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:51.789 06:52:05 -- target/nvmf_lvs_grow.sh@65 -- # wait 484405 00:14:52.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.355 Nvme0n1 : 3.00 14488.33 56.60 0.00 0.00 0.00 0.00 0.00 00:14:52.355 =================================================================================================================== 00:14:52.355 Total 
: 14488.33 56.60 0.00 0.00 0.00 0.00 0.00 00:14:52.355 00:14:53.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.732 Nvme0n1 : 4.00 14544.75 56.82 0.00 0.00 0.00 0.00 0.00 00:14:53.732 =================================================================================================================== 00:14:53.732 Total : 14544.75 56.82 0.00 0.00 0.00 0.00 0.00 00:14:53.732 00:14:54.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.668 Nvme0n1 : 5.00 14631.00 57.15 0.00 0.00 0.00 0.00 0.00 00:14:54.668 =================================================================================================================== 00:14:54.668 Total : 14631.00 57.15 0.00 0.00 0.00 0.00 0.00 00:14:54.668 00:14:55.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.602 Nvme0n1 : 6.00 14660.33 57.27 0.00 0.00 0.00 0.00 0.00 00:14:55.603 =================================================================================================================== 00:14:55.603 Total : 14660.33 57.27 0.00 0.00 0.00 0.00 0.00 00:14:55.603 00:14:56.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.538 Nvme0n1 : 7.00 14684.71 57.36 0.00 0.00 0.00 0.00 0.00 00:14:56.538 =================================================================================================================== 00:14:56.538 Total : 14684.71 57.36 0.00 0.00 0.00 0.00 0.00 00:14:56.538 00:14:57.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:57.483 Nvme0n1 : 8.00 14705.12 57.44 0.00 0.00 0.00 0.00 0.00 00:14:57.483 =================================================================================================================== 00:14:57.483 Total : 14705.12 57.44 0.00 0.00 0.00 0.00 0.00 00:14:57.483 00:14:58.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.420 Nvme0n1 : 9.00 14721.00 57.50 0.00 0.00 0.00 0.00 0.00 00:14:58.420 =================================================================================================================== 00:14:58.420 Total : 14721.00 57.50 0.00 0.00 0.00 0.00 0.00 00:14:58.420 00:14:59.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.356 Nvme0n1 : 10.00 14739.50 57.58 0.00 0.00 0.00 0.00 0.00 00:14:59.356 =================================================================================================================== 00:14:59.356 Total : 14739.50 57.58 0.00 0.00 0.00 0.00 0.00 00:14:59.356 00:14:59.356 00:14:59.356 Latency(us) 00:14:59.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.356 Nvme0n1 : 10.01 14740.58 57.58 0.00 0.00 8677.43 2949.12 12815.93 00:14:59.356 =================================================================================================================== 00:14:59.356 Total : 14740.58 57.58 0.00 0.00 8677.43 2949.12 12815.93 00:14:59.356 0 00:14:59.356 06:52:13 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 484138 00:14:59.356 06:52:13 -- common/autotest_common.sh@926 -- # '[' -z 484138 ']' 00:14:59.356 06:52:13 -- common/autotest_common.sh@930 -- # kill -0 484138 00:14:59.356 06:52:13 -- common/autotest_common.sh@931 -- # uname 00:14:59.356 06:52:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:59.356 06:52:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
484138 00:14:59.614 06:52:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:59.614 06:52:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:59.614 06:52:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 484138' 00:14:59.614 killing process with pid 484138 00:14:59.614 06:52:13 -- common/autotest_common.sh@945 -- # kill 484138 00:14:59.614 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.614 00:14:59.614 Latency(us) 00:14:59.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.614 =================================================================================================================== 00:14:59.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.614 06:52:13 -- common/autotest_common.sh@950 -- # wait 484138 00:14:59.872 06:52:13 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:59.872 06:52:14 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:14:59.872 06:52:14 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:00.132 06:52:14 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:00.132 06:52:14 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:00.132 06:52:14 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 481442 00:15:00.132 06:52:14 -- target/nvmf_lvs_grow.sh@74 -- # wait 481442 00:15:00.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 481442 Killed "${NVMF_APP[@]}" "$@" 00:15:00.392 06:52:14 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:00.392 06:52:14 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:00.392 06:52:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:00.392 06:52:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:00.392 06:52:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.392 06:52:14 -- nvmf/common.sh@469 -- # nvmfpid=485645 00:15:00.392 06:52:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:00.392 06:52:14 -- nvmf/common.sh@470 -- # waitforlisten 485645 00:15:00.392 06:52:14 -- common/autotest_common.sh@819 -- # '[' -z 485645 ']' 00:15:00.392 06:52:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.392 06:52:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.392 06:52:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.392 06:52:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.392 06:52:14 -- common/autotest_common.sh@10 -- # set +x 00:15:00.392 [2024-05-15 06:52:14.416514] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
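The dirty variant differs from the clean one in exactly one place: after the grow it SIGKILLs the target so the lvstore's blobstore is never shut down cleanly, then restarts it and lets metadata replay prove the grow survived. Roughly (the pids, core mask, namespace name and lvstore UUID are the ones from this run; the relative paths are a sketch):

    kill -9 "$nvmfpid"                                # die with the lvstore still dirty
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    ./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    # re-attaching the AIO bdev triggers the blobstore recovery logged
    # below ("Performing recovery on blobstore"), after which:
    ./scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183

If recovery worked, total_data_clusters still reads 99 and free_clusters 61, the same numbers the clean path checked.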
00:15:00.392 [2024-05-15 06:52:14.416604] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.392 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.392 [2024-05-15 06:52:14.495293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.392 [2024-05-15 06:52:14.606351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:00.392 [2024-05-15 06:52:14.606514] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.392 [2024-05-15 06:52:14.606532] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.392 [2024-05-15 06:52:14.606545] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.392 [2024-05-15 06:52:14.606572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.328 06:52:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.328 06:52:15 -- common/autotest_common.sh@852 -- # return 0 00:15:01.328 06:52:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:01.328 06:52:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:01.328 06:52:15 -- common/autotest_common.sh@10 -- # set +x 00:15:01.328 06:52:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.328 06:52:15 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:01.587 [2024-05-15 06:52:15.626771] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:01.587 [2024-05-15 06:52:15.626903] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:01.587 [2024-05-15 06:52:15.626974] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:01.587 06:52:15 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:01.587 06:52:15 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev cda55e0e-c430-4926-bf9d-b0c1827726b4 00:15:01.587 06:52:15 -- common/autotest_common.sh@887 -- # local bdev_name=cda55e0e-c430-4926-bf9d-b0c1827726b4 00:15:01.587 06:52:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:01.587 06:52:15 -- common/autotest_common.sh@889 -- # local i 00:15:01.587 06:52:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:01.587 06:52:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:01.587 06:52:15 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:01.846 06:52:15 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cda55e0e-c430-4926-bf9d-b0c1827726b4 -t 2000 00:15:02.104 [ 00:15:02.104 { 00:15:02.104 "name": "cda55e0e-c430-4926-bf9d-b0c1827726b4", 00:15:02.104 "aliases": [ 00:15:02.104 "lvs/lvol" 00:15:02.104 ], 00:15:02.104 "product_name": "Logical Volume", 00:15:02.104 "block_size": 4096, 00:15:02.104 "num_blocks": 38912, 00:15:02.104 "uuid": "cda55e0e-c430-4926-bf9d-b0c1827726b4", 00:15:02.104 "assigned_rate_limits": { 00:15:02.104 "rw_ios_per_sec": 0, 00:15:02.104 "rw_mbytes_per_sec": 0, 00:15:02.104 "r_mbytes_per_sec": 0, 00:15:02.104 
"w_mbytes_per_sec": 0 00:15:02.104 }, 00:15:02.104 "claimed": false, 00:15:02.104 "zoned": false, 00:15:02.104 "supported_io_types": { 00:15:02.104 "read": true, 00:15:02.104 "write": true, 00:15:02.104 "unmap": true, 00:15:02.104 "write_zeroes": true, 00:15:02.104 "flush": false, 00:15:02.104 "reset": true, 00:15:02.104 "compare": false, 00:15:02.104 "compare_and_write": false, 00:15:02.105 "abort": false, 00:15:02.105 "nvme_admin": false, 00:15:02.105 "nvme_io": false 00:15:02.105 }, 00:15:02.105 "driver_specific": { 00:15:02.105 "lvol": { 00:15:02.105 "lvol_store_uuid": "fc683dd6-ff23-4e60-a19f-a33fd7ada183", 00:15:02.105 "base_bdev": "aio_bdev", 00:15:02.105 "thin_provision": false, 00:15:02.105 "snapshot": false, 00:15:02.105 "clone": false, 00:15:02.105 "esnap_clone": false 00:15:02.105 } 00:15:02.105 } 00:15:02.105 } 00:15:02.105 ] 00:15:02.105 06:52:16 -- common/autotest_common.sh@895 -- # return 0 00:15:02.105 06:52:16 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:02.105 06:52:16 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:02.363 06:52:16 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:02.363 06:52:16 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:02.363 06:52:16 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:02.622 06:52:16 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:02.622 06:52:16 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:02.622 [2024-05-15 06:52:16.827568] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:02.622 06:52:16 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:02.622 06:52:16 -- common/autotest_common.sh@640 -- # local es=0 00:15:02.622 06:52:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:02.622 06:52:16 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.622 06:52:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:02.622 06:52:16 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.623 06:52:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:02.623 06:52:16 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.623 06:52:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:02.623 06:52:16 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.623 06:52:16 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.623 06:52:16 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:02.881 request: 00:15:02.881 { 00:15:02.881 
"uuid": "fc683dd6-ff23-4e60-a19f-a33fd7ada183", 00:15:02.881 "method": "bdev_lvol_get_lvstores", 00:15:02.881 "req_id": 1 00:15:02.881 } 00:15:02.881 Got JSON-RPC error response 00:15:02.881 response: 00:15:02.881 { 00:15:02.881 "code": -19, 00:15:02.881 "message": "No such device" 00:15:02.881 } 00:15:02.881 06:52:17 -- common/autotest_common.sh@643 -- # es=1 00:15:02.881 06:52:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:02.881 06:52:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:02.881 06:52:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:02.881 06:52:17 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:03.140 aio_bdev 00:15:03.140 06:52:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev cda55e0e-c430-4926-bf9d-b0c1827726b4 00:15:03.140 06:52:17 -- common/autotest_common.sh@887 -- # local bdev_name=cda55e0e-c430-4926-bf9d-b0c1827726b4 00:15:03.140 06:52:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:03.140 06:52:17 -- common/autotest_common.sh@889 -- # local i 00:15:03.140 06:52:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:03.140 06:52:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:03.140 06:52:17 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:03.399 06:52:17 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cda55e0e-c430-4926-bf9d-b0c1827726b4 -t 2000 00:15:03.658 [ 00:15:03.658 { 00:15:03.659 "name": "cda55e0e-c430-4926-bf9d-b0c1827726b4", 00:15:03.659 "aliases": [ 00:15:03.659 "lvs/lvol" 00:15:03.659 ], 00:15:03.659 "product_name": "Logical Volume", 00:15:03.659 "block_size": 4096, 00:15:03.659 "num_blocks": 38912, 00:15:03.659 "uuid": "cda55e0e-c430-4926-bf9d-b0c1827726b4", 00:15:03.659 "assigned_rate_limits": { 00:15:03.659 "rw_ios_per_sec": 0, 00:15:03.659 "rw_mbytes_per_sec": 0, 00:15:03.659 "r_mbytes_per_sec": 0, 00:15:03.659 "w_mbytes_per_sec": 0 00:15:03.659 }, 00:15:03.659 "claimed": false, 00:15:03.659 "zoned": false, 00:15:03.659 "supported_io_types": { 00:15:03.659 "read": true, 00:15:03.659 "write": true, 00:15:03.659 "unmap": true, 00:15:03.659 "write_zeroes": true, 00:15:03.659 "flush": false, 00:15:03.659 "reset": true, 00:15:03.659 "compare": false, 00:15:03.659 "compare_and_write": false, 00:15:03.659 "abort": false, 00:15:03.659 "nvme_admin": false, 00:15:03.659 "nvme_io": false 00:15:03.659 }, 00:15:03.659 "driver_specific": { 00:15:03.659 "lvol": { 00:15:03.659 "lvol_store_uuid": "fc683dd6-ff23-4e60-a19f-a33fd7ada183", 00:15:03.659 "base_bdev": "aio_bdev", 00:15:03.659 "thin_provision": false, 00:15:03.659 "snapshot": false, 00:15:03.659 "clone": false, 00:15:03.659 "esnap_clone": false 00:15:03.659 } 00:15:03.659 } 00:15:03.659 } 00:15:03.659 ] 00:15:03.659 06:52:17 -- common/autotest_common.sh@895 -- # return 0 00:15:03.659 06:52:17 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:03.659 06:52:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:03.918 06:52:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:03.918 06:52:18 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:03.918 06:52:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:04.177 06:52:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:04.177 06:52:18 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cda55e0e-c430-4926-bf9d-b0c1827726b4 00:15:04.436 06:52:18 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc683dd6-ff23-4e60-a19f-a33fd7ada183 00:15:04.695 06:52:18 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:04.954 06:52:19 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.954 00:15:04.955 real 0m19.682s 00:15:04.955 user 0m49.438s 00:15:04.955 sys 0m4.778s 00:15:04.955 06:52:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:04.955 06:52:19 -- common/autotest_common.sh@10 -- # set +x 00:15:04.955 ************************************ 00:15:04.955 END TEST lvs_grow_dirty 00:15:04.955 ************************************ 00:15:04.955 06:52:19 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:04.955 06:52:19 -- common/autotest_common.sh@796 -- # type=--id 00:15:04.955 06:52:19 -- common/autotest_common.sh@797 -- # id=0 00:15:04.955 06:52:19 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:15:04.955 06:52:19 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:04.955 06:52:19 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:15:04.955 06:52:19 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:15:04.955 06:52:19 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:15:04.955 06:52:19 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:04.955 nvmf_trace.0 00:15:04.955 06:52:19 -- common/autotest_common.sh@811 -- # return 0 00:15:04.955 06:52:19 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:04.955 06:52:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:04.955 06:52:19 -- nvmf/common.sh@116 -- # sync 00:15:04.955 06:52:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:04.955 06:52:19 -- nvmf/common.sh@119 -- # set +e 00:15:04.955 06:52:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:04.955 06:52:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:04.955 rmmod nvme_tcp 00:15:04.955 rmmod nvme_fabrics 00:15:04.955 rmmod nvme_keyring 00:15:04.955 06:52:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:04.955 06:52:19 -- nvmf/common.sh@123 -- # set -e 00:15:04.955 06:52:19 -- nvmf/common.sh@124 -- # return 0 00:15:04.955 06:52:19 -- nvmf/common.sh@477 -- # '[' -n 485645 ']' 00:15:04.955 06:52:19 -- nvmf/common.sh@478 -- # killprocess 485645 00:15:04.955 06:52:19 -- common/autotest_common.sh@926 -- # '[' -z 485645 ']' 00:15:04.955 06:52:19 -- common/autotest_common.sh@930 -- # kill -0 485645 00:15:04.955 06:52:19 -- common/autotest_common.sh@931 -- # uname 00:15:04.955 06:52:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:04.955 06:52:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 485645 00:15:04.955 06:52:19 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:15:04.955 06:52:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:04.955 06:52:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 485645' 00:15:04.955 killing process with pid 485645 00:15:04.955 06:52:19 -- common/autotest_common.sh@945 -- # kill 485645 00:15:04.955 06:52:19 -- common/autotest_common.sh@950 -- # wait 485645 00:15:05.215 06:52:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.215 06:52:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.215 06:52:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.215 06:52:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.215 06:52:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.215 06:52:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.215 06:52:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.215 06:52:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.749 06:52:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:07.749 00:15:07.749 real 0m43.628s 00:15:07.749 user 1m13.101s 00:15:07.749 sys 0m8.770s 00:15:07.749 06:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.749 06:52:21 -- common/autotest_common.sh@10 -- # set +x 00:15:07.749 ************************************ 00:15:07.749 END TEST nvmf_lvs_grow 00:15:07.749 ************************************ 00:15:07.749 06:52:21 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.749 06:52:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:07.749 06:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.749 06:52:21 -- common/autotest_common.sh@10 -- # set +x 00:15:07.749 ************************************ 00:15:07.749 START TEST nvmf_bdev_io_wait 00:15:07.749 ************************************ 00:15:07.749 06:52:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:07.749 * Looking for test storage... 
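That lvs_grow teardown is easier to read as a unit (a condensed sketch; the interface and namespace names are from this run, the output path is abbreviated to a variable, and ip netns delete is an assumption standing in for the _remove_spdk_ns helper):

    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    modprobe -v -r nvme-tcp               # pulls out nvme_fabrics/nvme_keyring too
    kill "$nvmfpid"                       # plain SIGTERM this time, not -9
    ip netns delete cvl_0_0_ns_spdk       # assumption: roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1

The "Looking for test storage" line just above and everything below it already belong to the next test, bdev_io_wait, which rebuilds the same namespace-based TCP topology from scratch.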
00:15:07.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.749 06:52:21 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.749 06:52:21 -- nvmf/common.sh@7 -- # uname -s 00:15:07.749 06:52:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.749 06:52:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.749 06:52:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.749 06:52:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.749 06:52:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.749 06:52:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.749 06:52:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.749 06:52:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.749 06:52:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.749 06:52:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.749 06:52:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.749 06:52:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.749 06:52:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.749 06:52:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.749 06:52:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.749 06:52:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.749 06:52:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.749 06:52:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.749 06:52:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.749 06:52:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.750 06:52:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.750 06:52:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.750 06:52:21 -- paths/export.sh@5 -- # export PATH 00:15:07.750 06:52:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.750 06:52:21 -- nvmf/common.sh@46 -- # : 0 00:15:07.750 06:52:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:07.750 06:52:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:07.750 06:52:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:07.750 06:52:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.750 06:52:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.750 06:52:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:07.750 06:52:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:07.750 06:52:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:07.750 06:52:21 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.750 06:52:21 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.750 06:52:21 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:07.750 06:52:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:07.750 06:52:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.750 06:52:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:07.750 06:52:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:07.750 06:52:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:07.750 06:52:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.750 06:52:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.750 06:52:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.750 06:52:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:07.750 06:52:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:07.750 06:52:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:07.750 06:52:21 -- common/autotest_common.sh@10 -- # set +x 00:15:10.278 06:52:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:10.278 06:52:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:10.278 06:52:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:10.278 06:52:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:10.278 06:52:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:10.278 06:52:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:10.278 06:52:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:10.278 06:52:23 -- nvmf/common.sh@294 -- # net_devs=() 00:15:10.278 06:52:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:10.278 06:52:23 -- 
nvmf/common.sh@295 -- # e810=() 00:15:10.278 06:52:23 -- nvmf/common.sh@295 -- # local -ga e810 00:15:10.278 06:52:23 -- nvmf/common.sh@296 -- # x722=() 00:15:10.278 06:52:23 -- nvmf/common.sh@296 -- # local -ga x722 00:15:10.278 06:52:23 -- nvmf/common.sh@297 -- # mlx=() 00:15:10.278 06:52:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:10.278 06:52:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.278 06:52:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:10.278 06:52:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:10.278 06:52:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:10.278 06:52:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:10.278 06:52:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:10.278 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:10.278 06:52:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:10.278 06:52:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:10.278 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:10.278 06:52:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:10.278 06:52:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:10.278 06:52:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.278 06:52:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:10.278 06:52:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.278 06:52:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:15:10.278 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:10.278 06:52:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.278 06:52:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:10.278 06:52:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.278 06:52:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:10.278 06:52:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.278 06:52:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:10.278 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:10.278 06:52:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.278 06:52:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:10.278 06:52:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:10.278 06:52:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:10.278 06:52:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:10.278 06:52:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.278 06:52:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.278 06:52:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.278 06:52:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:10.278 06:52:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.278 06:52:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.278 06:52:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:10.278 06:52:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.278 06:52:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.278 06:52:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:10.278 06:52:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:10.278 06:52:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.278 06:52:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.278 06:52:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.278 06:52:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.278 06:52:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:10.278 06:52:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.278 06:52:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.278 06:52:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.278 06:52:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:10.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:15:10.278 00:15:10.278 --- 10.0.0.2 ping statistics --- 00:15:10.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.278 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:10.278 06:52:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:10.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:15:10.279 00:15:10.279 --- 10.0.0.1 ping statistics --- 00:15:10.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.279 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:15:10.279 06:52:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.279 06:52:24 -- nvmf/common.sh@410 -- # return 0 00:15:10.279 06:52:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:10.279 06:52:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.279 06:52:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:10.279 06:52:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:10.279 06:52:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.279 06:52:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:10.279 06:52:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:10.279 06:52:24 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:10.279 06:52:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:10.279 06:52:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:10.279 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.279 06:52:24 -- nvmf/common.sh@469 -- # nvmfpid=488616 00:15:10.279 06:52:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:10.279 06:52:24 -- nvmf/common.sh@470 -- # waitforlisten 488616 00:15:10.279 06:52:24 -- common/autotest_common.sh@819 -- # '[' -z 488616 ']' 00:15:10.279 06:52:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.279 06:52:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.279 06:52:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.279 06:52:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.279 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.279 [2024-05-15 06:52:24.172887] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:10.279 [2024-05-15 06:52:24.172961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.279 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.279 [2024-05-15 06:52:24.251050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.279 [2024-05-15 06:52:24.363997] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:10.279 [2024-05-15 06:52:24.364162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.279 [2024-05-15 06:52:24.364179] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.279 [2024-05-15 06:52:24.364192] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
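Note the --wait-for-rpc on the target invocation above: the framework stays uninitialized until the test injects its configuration, which is what lets bdev_io_wait shrink the bdev_io pool to a tiny size before anything starts. The RPC sequence on the following lines, as a standalone sketch (the rpc.py flags and arguments are the real ones from this run; the stated intent is inferred from the test's name, presumably to force bdev_io allocation failures and exercise spdk_bdev_queue_io_wait):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # bdev_io pool of 5, per-thread cache of 1
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420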
00:15:10.279 [2024-05-15 06:52:24.364326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.279 [2024-05-15 06:52:24.364360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.279 [2024-05-15 06:52:24.364419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.279 [2024-05-15 06:52:24.364421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.279 06:52:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:10.279 06:52:24 -- common/autotest_common.sh@852 -- # return 0 00:15:10.279 06:52:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.279 06:52:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:10.279 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.279 06:52:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.279 06:52:24 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:10.279 06:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.279 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.279 06:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.279 06:52:24 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:10.279 06:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.279 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.279 06:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.279 06:52:24 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.279 06:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.279 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.279 [2024-05-15 06:52:24.509054] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.537 06:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:10.537 06:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.537 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.537 Malloc0 00:15:10.537 06:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:10.537 06:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.537 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.537 06:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.537 06:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.537 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.537 06:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.537 06:52:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.537 06:52:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.537 [2024-05-15 06:52:24.572644] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.537 06:52:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=488643 00:15:10.537 06:52:24 
-- target/bdev_io_wait.sh@30 -- # READ_PID=488644 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=488646 00:15:10.537 06:52:24 -- nvmf/common.sh@520 -- # config=() 00:15:10.537 06:52:24 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.537 06:52:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:10.537 06:52:24 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:10.537 06:52:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.537 { 00:15:10.537 "params": { 00:15:10.537 "name": "Nvme$subsystem", 00:15:10.537 "trtype": "$TEST_TRANSPORT", 00:15:10.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.537 "adrfam": "ipv4", 00:15:10.537 "trsvcid": "$NVMF_PORT", 00:15:10.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.537 "hdgst": ${hdgst:-false}, 00:15:10.537 "ddgst": ${ddgst:-false} 00:15:10.537 }, 00:15:10.537 "method": "bdev_nvme_attach_controller" 00:15:10.537 } 00:15:10.537 EOF 00:15:10.537 )") 00:15:10.537 06:52:24 -- nvmf/common.sh@520 -- # config=() 00:15:10.537 06:52:24 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.538 06:52:24 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=488649 00:15:10.538 06:52:24 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:10.538 06:52:24 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:10.538 06:52:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.538 06:52:24 -- target/bdev_io_wait.sh@35 -- # sync 00:15:10.538 06:52:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.538 { 00:15:10.538 "params": { 00:15:10.538 "name": "Nvme$subsystem", 00:15:10.538 "trtype": "$TEST_TRANSPORT", 00:15:10.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.538 "adrfam": "ipv4", 00:15:10.538 "trsvcid": "$NVMF_PORT", 00:15:10.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.538 "hdgst": ${hdgst:-false}, 00:15:10.538 "ddgst": ${ddgst:-false} 00:15:10.538 }, 00:15:10.538 "method": "bdev_nvme_attach_controller" 00:15:10.538 } 00:15:10.538 EOF 00:15:10.538 )") 00:15:10.538 06:52:24 -- nvmf/common.sh@520 -- # config=() 00:15:10.538 06:52:24 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.538 06:52:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.538 06:52:24 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:10.538 06:52:24 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:10.538 06:52:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.538 { 00:15:10.538 "params": { 00:15:10.538 "name": "Nvme$subsystem", 00:15:10.538 "trtype": "$TEST_TRANSPORT", 00:15:10.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.538 "adrfam": "ipv4", 00:15:10.538 "trsvcid": "$NVMF_PORT", 00:15:10.538 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.538 "hdgst": ${hdgst:-false}, 00:15:10.538 "ddgst": ${ddgst:-false} 00:15:10.538 }, 00:15:10.538 "method": "bdev_nvme_attach_controller" 00:15:10.538 } 00:15:10.538 EOF 00:15:10.538 )") 00:15:10.538 06:52:24 -- nvmf/common.sh@520 -- # config=() 00:15:10.538 06:52:24 -- nvmf/common.sh@542 -- # cat 00:15:10.538 06:52:24 -- nvmf/common.sh@520 -- # local subsystem config 00:15:10.538 06:52:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:10.538 06:52:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:10.538 { 00:15:10.538 "params": { 00:15:10.538 "name": "Nvme$subsystem", 00:15:10.538 "trtype": "$TEST_TRANSPORT", 00:15:10.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:10.538 "adrfam": "ipv4", 00:15:10.538 "trsvcid": "$NVMF_PORT", 00:15:10.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:10.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:10.538 "hdgst": ${hdgst:-false}, 00:15:10.538 "ddgst": ${ddgst:-false} 00:15:10.538 }, 00:15:10.538 "method": "bdev_nvme_attach_controller" 00:15:10.538 } 00:15:10.538 EOF 00:15:10.538 )") 00:15:10.538 06:52:24 -- nvmf/common.sh@542 -- # cat 00:15:10.538 06:52:24 -- nvmf/common.sh@542 -- # cat 00:15:10.538 06:52:24 -- target/bdev_io_wait.sh@37 -- # wait 488643 00:15:10.538 06:52:24 -- nvmf/common.sh@542 -- # cat 00:15:10.538 06:52:24 -- nvmf/common.sh@544 -- # jq . 00:15:10.538 06:52:24 -- nvmf/common.sh@544 -- # jq . 00:15:10.538 06:52:24 -- nvmf/common.sh@544 -- # jq . 00:15:10.538 06:52:24 -- nvmf/common.sh@544 -- # jq . 00:15:10.538 06:52:24 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.538 06:52:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.538 "params": { 00:15:10.538 "name": "Nvme1", 00:15:10.538 "trtype": "tcp", 00:15:10.538 "traddr": "10.0.0.2", 00:15:10.538 "adrfam": "ipv4", 00:15:10.538 "trsvcid": "4420", 00:15:10.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.538 "hdgst": false, 00:15:10.538 "ddgst": false 00:15:10.538 }, 00:15:10.538 "method": "bdev_nvme_attach_controller" 00:15:10.538 }' 00:15:10.538 06:52:24 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.538 06:52:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.538 "params": { 00:15:10.538 "name": "Nvme1", 00:15:10.538 "trtype": "tcp", 00:15:10.538 "traddr": "10.0.0.2", 00:15:10.538 "adrfam": "ipv4", 00:15:10.538 "trsvcid": "4420", 00:15:10.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.538 "hdgst": false, 00:15:10.538 "ddgst": false 00:15:10.538 }, 00:15:10.538 "method": "bdev_nvme_attach_controller" 00:15:10.538 }' 00:15:10.538 06:52:24 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.538 06:52:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.538 "params": { 00:15:10.538 "name": "Nvme1", 00:15:10.538 "trtype": "tcp", 00:15:10.538 "traddr": "10.0.0.2", 00:15:10.538 "adrfam": "ipv4", 00:15:10.538 "trsvcid": "4420", 00:15:10.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.538 "hdgst": false, 00:15:10.538 "ddgst": false 00:15:10.538 }, 00:15:10.538 "method": "bdev_nvme_attach_controller" 00:15:10.538 }' 00:15:10.538 06:52:24 -- nvmf/common.sh@545 -- # IFS=, 00:15:10.538 06:52:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:10.538 "params": { 00:15:10.538 "name": "Nvme1", 00:15:10.538 "trtype": "tcp", 00:15:10.538 "traddr": "10.0.0.2", 00:15:10.538 
"adrfam": "ipv4", 00:15:10.538 "trsvcid": "4420", 00:15:10.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:10.538 "hdgst": false, 00:15:10.538 "ddgst": false 00:15:10.538 }, 00:15:10.538 "method": "bdev_nvme_attach_controller" 00:15:10.538 }' 00:15:10.538 [2024-05-15 06:52:24.616402] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:10.538 [2024-05-15 06:52:24.616402] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:10.538 [2024-05-15 06:52:24.616402] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:10.538 [2024-05-15 06:52:24.616494] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 06:52:24.616494] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-15 06:52:24.616495] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:10.538 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:10.538 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:10.538 [2024-05-15 06:52:24.620891] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:10.538 [2024-05-15 06:52:24.621002] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:10.538 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.538 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.796 [2024-05-15 06:52:24.803456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.796 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.796 [2024-05-15 06:52:24.900267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:10.796 [2024-05-15 06:52:24.905510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.796 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.796 [2024-05-15 06:52:25.004097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:10.796 [2024-05-15 06:52:25.009228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.055 [2024-05-15 06:52:25.084690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.055 [2024-05-15 06:52:25.109284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:11.055 [2024-05-15 06:52:25.178487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:11.055 Running I/O for 1 seconds... 00:15:11.313 Running I/O for 1 seconds... 00:15:11.313 Running I/O for 1 seconds... 00:15:11.313 Running I/O for 1 seconds... 
00:15:12.310 00:15:12.310 Latency(us) 00:15:12.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.311 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:12.311 Nvme1n1 : 1.01 12982.32 50.71 0.00 0.00 9822.28 6553.60 19126.80 00:15:12.311 =================================================================================================================== 00:15:12.311 Total : 12982.32 50.71 0.00 0.00 9822.28 6553.60 19126.80 00:15:12.311 00:15:12.311 Latency(us) 00:15:12.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.311 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:12.311 Nvme1n1 : 1.00 199812.26 780.52 0.00 0.00 638.12 262.45 825.27 00:15:12.311 =================================================================================================================== 00:15:12.311 Total : 199812.26 780.52 0.00 0.00 638.12 262.45 825.27 00:15:12.311 00:15:12.311 Latency(us) 00:15:12.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.311 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:12.311 Nvme1n1 : 1.01 9986.02 39.01 0.00 0.00 12773.04 5995.33 22816.24 00:15:12.311 =================================================================================================================== 00:15:12.311 Total : 9986.02 39.01 0.00 0.00 12773.04 5995.33 22816.24 00:15:12.311 00:15:12.311 Latency(us) 00:15:12.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.311 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:12.311 Nvme1n1 : 1.09 842.93 3.29 0.00 0.00 150983.06 8252.68 612057.69 00:15:12.311 =================================================================================================================== 00:15:12.311 Total : 842.93 3.29 0.00 0.00 150983.06 8252.68 612057.69 00:15:12.569 06:52:26 -- target/bdev_io_wait.sh@38 -- # wait 488644 00:15:12.569 06:52:26 -- target/bdev_io_wait.sh@39 -- # wait 488646 00:15:12.569 06:52:26 -- target/bdev_io_wait.sh@40 -- # wait 488649 00:15:12.569 06:52:26 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.569 06:52:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:12.569 06:52:26 -- common/autotest_common.sh@10 -- # set +x 00:15:12.569 06:52:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:12.569 06:52:26 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:12.569 06:52:26 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:12.569 06:52:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:12.569 06:52:26 -- nvmf/common.sh@116 -- # sync 00:15:12.569 06:52:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:12.569 06:52:26 -- nvmf/common.sh@119 -- # set +e 00:15:12.569 06:52:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:12.569 06:52:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:12.569 rmmod nvme_tcp 00:15:12.569 rmmod nvme_fabrics 00:15:12.569 rmmod nvme_keyring 00:15:12.826 06:52:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:12.826 06:52:26 -- nvmf/common.sh@123 -- # set -e 00:15:12.826 06:52:26 -- nvmf/common.sh@124 -- # return 0 00:15:12.826 06:52:26 -- nvmf/common.sh@477 -- # '[' -n 488616 ']' 00:15:12.826 06:52:26 -- nvmf/common.sh@478 -- # killprocess 488616 00:15:12.826 06:52:26 -- common/autotest_common.sh@926 -- # '[' -z 488616 ']' 00:15:12.826 06:52:26 -- common/autotest_common.sh@930 
-- # kill -0 488616 00:15:12.826 06:52:26 -- common/autotest_common.sh@931 -- # uname 00:15:12.826 06:52:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:12.826 06:52:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 488616 00:15:12.826 06:52:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:12.826 06:52:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:12.826 06:52:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 488616' 00:15:12.826 killing process with pid 488616 00:15:12.826 06:52:26 -- common/autotest_common.sh@945 -- # kill 488616 00:15:12.826 06:52:26 -- common/autotest_common.sh@950 -- # wait 488616 00:15:13.133 06:52:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:13.133 06:52:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:13.133 06:52:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:13.133 06:52:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:13.133 06:52:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:13.133 06:52:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.133 06:52:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:13.133 06:52:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.037 06:52:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:15.037 00:15:15.037 real 0m7.613s 00:15:15.037 user 0m16.126s 00:15:15.037 sys 0m3.684s 00:15:15.037 06:52:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.037 06:52:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.037 ************************************ 00:15:15.037 END TEST nvmf_bdev_io_wait 00:15:15.037 ************************************ 00:15:15.037 06:52:29 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:15.037 06:52:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:15.037 06:52:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:15.037 06:52:29 -- common/autotest_common.sh@10 -- # set +x 00:15:15.037 ************************************ 00:15:15.037 START TEST nvmf_queue_depth 00:15:15.037 ************************************ 00:15:15.037 06:52:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:15.037 * Looking for test storage... 
00:15:15.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.037 06:52:29 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.037 06:52:29 -- nvmf/common.sh@7 -- # uname -s 00:15:15.037 06:52:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.037 06:52:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.037 06:52:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.037 06:52:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.037 06:52:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.037 06:52:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.037 06:52:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.037 06:52:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.037 06:52:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.037 06:52:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.037 06:52:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.037 06:52:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.037 06:52:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.037 06:52:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.037 06:52:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.037 06:52:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.037 06:52:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.037 06:52:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.037 06:52:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.037 06:52:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.037 06:52:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.037 06:52:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.037 06:52:29 -- paths/export.sh@5 -- # export PATH 00:15:15.037 06:52:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.037 06:52:29 -- nvmf/common.sh@46 -- # : 0 00:15:15.037 06:52:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:15.037 06:52:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:15.037 06:52:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:15.037 06:52:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.037 06:52:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.037 06:52:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:15.037 06:52:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:15.037 06:52:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:15.037 06:52:29 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:15.037 06:52:29 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:15.037 06:52:29 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.037 06:52:29 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:15.037 06:52:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:15.037 06:52:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.037 06:52:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:15.037 06:52:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:15.037 06:52:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:15.037 06:52:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.037 06:52:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.037 06:52:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.037 06:52:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:15.037 06:52:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:15.037 06:52:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:15.037 06:52:29 -- common/autotest_common.sh@10 -- # set +x 00:15:17.569 06:52:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:17.569 06:52:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:17.569 06:52:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:17.569 06:52:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:17.569 06:52:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:17.569 06:52:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:17.569 06:52:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:17.569 06:52:31 -- nvmf/common.sh@294 -- # net_devs=() 
00:15:17.569 06:52:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:17.569 06:52:31 -- nvmf/common.sh@295 -- # e810=() 00:15:17.569 06:52:31 -- nvmf/common.sh@295 -- # local -ga e810 00:15:17.569 06:52:31 -- nvmf/common.sh@296 -- # x722=() 00:15:17.569 06:52:31 -- nvmf/common.sh@296 -- # local -ga x722 00:15:17.569 06:52:31 -- nvmf/common.sh@297 -- # mlx=() 00:15:17.569 06:52:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:17.569 06:52:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.569 06:52:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:17.569 06:52:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:17.569 06:52:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:17.569 06:52:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:17.569 06:52:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:17.569 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:17.569 06:52:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:17.569 06:52:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:17.569 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:17.569 06:52:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:17.569 06:52:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:17.569 06:52:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:17.569 06:52:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.569 06:52:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:17.569 06:52:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:17.569 06:52:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:17.569 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:17.569 06:52:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.569 06:52:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:17.569 06:52:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.569 06:52:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:17.569 06:52:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.569 06:52:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:17.569 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:17.569 06:52:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.569 06:52:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:17.569 06:52:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:17.569 06:52:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:17.570 06:52:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:17.570 06:52:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:17.570 06:52:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.570 06:52:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.570 06:52:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.570 06:52:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:17.570 06:52:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.570 06:52:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.570 06:52:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:17.570 06:52:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.570 06:52:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.570 06:52:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:17.570 06:52:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:17.570 06:52:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.570 06:52:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.570 06:52:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.570 06:52:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.570 06:52:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:17.570 06:52:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.828 06:52:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.828 06:52:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.828 06:52:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:17.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:15:17.828 00:15:17.828 --- 10.0.0.2 ping statistics --- 00:15:17.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.828 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:15:17.828 06:52:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:15:17.828 00:15:17.828 --- 10.0.0.1 ping statistics --- 00:15:17.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.828 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:15:17.828 06:52:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.828 06:52:31 -- nvmf/common.sh@410 -- # return 0 00:15:17.828 06:52:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.828 06:52:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.828 06:52:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:17.828 06:52:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:17.828 06:52:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.828 06:52:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:17.828 06:52:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:17.828 06:52:31 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:17.828 06:52:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.828 06:52:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:17.828 06:52:31 -- common/autotest_common.sh@10 -- # set +x 00:15:17.828 06:52:31 -- nvmf/common.sh@469 -- # nvmfpid=491308 00:15:17.828 06:52:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:17.828 06:52:31 -- nvmf/common.sh@470 -- # waitforlisten 491308 00:15:17.828 06:52:31 -- common/autotest_common.sh@819 -- # '[' -z 491308 ']' 00:15:17.828 06:52:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.828 06:52:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.828 06:52:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.828 06:52:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.828 06:52:31 -- common/autotest_common.sh@10 -- # set +x 00:15:17.828 [2024-05-15 06:52:31.924997] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:17.828 [2024-05-15 06:52:31.925081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.828 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.828 [2024-05-15 06:52:32.007014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.086 [2024-05-15 06:52:32.120885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.086 [2024-05-15 06:52:32.121076] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.086 [2024-05-15 06:52:32.121105] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.086 [2024-05-15 06:52:32.121120] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
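[editor sketch] The provisioning that follows in the trace (rpc_cmd nvmf_create_transport through nvmf_subsystem_add_listener) is the standard five-call target setup. Spelled out directly against rpc.py it looks like the sketch below; the commands and arguments are taken verbatim from this log, with the default RPC socket assumed:

    # Sketch only: the same target provisioning as the rpc_cmd calls
    # traced below, issued directly with rpc.py (default /var/tmp/spdk.sock).
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, bdevperf attaches from the initiator side (bdev_nvme_attach_controller against 10.0.0.2 port 4420 over its own socket, /var/tmp/bdevperf.sock) and drives the 1024-deep verify workload whose results appear further down.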
00:15:18.086 [2024-05-15 06:52:32.121151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.652 06:52:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.652 06:52:32 -- common/autotest_common.sh@852 -- # return 0 00:15:18.652 06:52:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.652 06:52:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:18.652 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:18.652 06:52:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.652 06:52:32 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:18.652 06:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.652 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:18.652 [2024-05-15 06:52:32.872895] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.652 06:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.652 06:52:32 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:18.652 06:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.652 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:18.911 Malloc0 00:15:18.911 06:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.911 06:52:32 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.911 06:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.911 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:18.911 06:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.911 06:52:32 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:18.911 06:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.911 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:18.911 06:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.911 06:52:32 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:18.911 06:52:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:18.911 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:18.911 [2024-05-15 06:52:32.940790] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:18.911 06:52:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:18.911 06:52:32 -- target/queue_depth.sh@30 -- # bdevperf_pid=491464 00:15:18.911 06:52:32 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:18.911 06:52:32 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.911 06:52:32 -- target/queue_depth.sh@33 -- # waitforlisten 491464 /var/tmp/bdevperf.sock 00:15:18.911 06:52:32 -- common/autotest_common.sh@819 -- # '[' -z 491464 ']' 00:15:18.911 06:52:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.911 06:52:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:18.911 06:52:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:18.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.911 06:52:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:18.911 06:52:32 -- common/autotest_common.sh@10 -- # set +x 00:15:18.911 [2024-05-15 06:52:32.981447] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:18.911 [2024-05-15 06:52:32.981507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491464 ] 00:15:18.911 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.911 [2024-05-15 06:52:33.053289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.168 [2024-05-15 06:52:33.167580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.732 06:52:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:19.732 06:52:33 -- common/autotest_common.sh@852 -- # return 0 00:15:19.732 06:52:33 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:19.732 06:52:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.732 06:52:33 -- common/autotest_common.sh@10 -- # set +x 00:15:19.989 NVMe0n1 00:15:19.989 06:52:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:19.989 06:52:34 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:20.247 Running I/O for 10 seconds... 00:15:30.218 00:15:30.218 Latency(us) 00:15:30.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.218 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:30.218 Verification LBA range: start 0x0 length 0x4000 00:15:30.218 NVMe0n1 : 10.07 12647.97 49.41 0.00 0.00 80640.63 14951.92 59030.95 00:15:30.218 =================================================================================================================== 00:15:30.218 Total : 12647.97 49.41 0.00 0.00 80640.63 14951.92 59030.95 00:15:30.218 0 00:15:30.218 06:52:44 -- target/queue_depth.sh@39 -- # killprocess 491464 00:15:30.218 06:52:44 -- common/autotest_common.sh@926 -- # '[' -z 491464 ']' 00:15:30.218 06:52:44 -- common/autotest_common.sh@930 -- # kill -0 491464 00:15:30.218 06:52:44 -- common/autotest_common.sh@931 -- # uname 00:15:30.218 06:52:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:30.218 06:52:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 491464 00:15:30.218 06:52:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:30.218 06:52:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:30.218 06:52:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 491464' 00:15:30.218 killing process with pid 491464 00:15:30.218 06:52:44 -- common/autotest_common.sh@945 -- # kill 491464 00:15:30.218 Received shutdown signal, test time was about 10.000000 seconds 00:15:30.218 00:15:30.218 Latency(us) 00:15:30.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.218 =================================================================================================================== 00:15:30.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.218 06:52:44 -- 
common/autotest_common.sh@950 -- # wait 491464 00:15:30.475 06:52:44 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:30.475 06:52:44 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:30.475 06:52:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:30.475 06:52:44 -- nvmf/common.sh@116 -- # sync 00:15:30.475 06:52:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:30.475 06:52:44 -- nvmf/common.sh@119 -- # set +e 00:15:30.475 06:52:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:30.475 06:52:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:30.475 rmmod nvme_tcp 00:15:30.475 rmmod nvme_fabrics 00:15:30.475 rmmod nvme_keyring 00:15:30.732 06:52:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:30.732 06:52:44 -- nvmf/common.sh@123 -- # set -e 00:15:30.732 06:52:44 -- nvmf/common.sh@124 -- # return 0 00:15:30.732 06:52:44 -- nvmf/common.sh@477 -- # '[' -n 491308 ']' 00:15:30.732 06:52:44 -- nvmf/common.sh@478 -- # killprocess 491308 00:15:30.732 06:52:44 -- common/autotest_common.sh@926 -- # '[' -z 491308 ']' 00:15:30.732 06:52:44 -- common/autotest_common.sh@930 -- # kill -0 491308 00:15:30.732 06:52:44 -- common/autotest_common.sh@931 -- # uname 00:15:30.732 06:52:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:30.732 06:52:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 491308 00:15:30.732 06:52:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:30.732 06:52:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:30.732 06:52:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 491308' 00:15:30.732 killing process with pid 491308 00:15:30.732 06:52:44 -- common/autotest_common.sh@945 -- # kill 491308 00:15:30.732 06:52:44 -- common/autotest_common.sh@950 -- # wait 491308 00:15:30.989 06:52:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:30.989 06:52:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:30.989 06:52:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:30.989 06:52:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.989 06:52:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:30.989 06:52:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.989 06:52:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.989 06:52:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.895 06:52:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:32.895 00:15:32.895 real 0m17.952s 00:15:32.895 user 0m25.184s 00:15:32.895 sys 0m3.482s 00:15:32.895 06:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.895 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:15:32.895 ************************************ 00:15:32.895 END TEST nvmf_queue_depth 00:15:32.895 ************************************ 00:15:32.895 06:52:47 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:32.895 06:52:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:32.895 06:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:32.895 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:15:33.153 ************************************ 00:15:33.153 START TEST nvmf_multipath 00:15:33.153 ************************************ 00:15:33.153 06:52:47 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:33.153 * Looking for test storage... 00:15:33.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.153 06:52:47 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.153 06:52:47 -- nvmf/common.sh@7 -- # uname -s 00:15:33.153 06:52:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.153 06:52:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.153 06:52:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.153 06:52:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.153 06:52:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.153 06:52:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.153 06:52:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.153 06:52:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.153 06:52:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.153 06:52:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.153 06:52:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.153 06:52:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.153 06:52:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.153 06:52:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.153 06:52:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.153 06:52:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.153 06:52:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.153 06:52:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.153 06:52:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.154 06:52:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.154 06:52:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.154 06:52:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.154 06:52:47 -- paths/export.sh@5 -- # export PATH 00:15:33.154 06:52:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.154 06:52:47 -- nvmf/common.sh@46 -- # : 0 00:15:33.154 06:52:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:33.154 06:52:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:33.154 06:52:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:33.154 06:52:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.154 06:52:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.154 06:52:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:33.154 06:52:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:33.154 06:52:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:33.154 06:52:47 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.154 06:52:47 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.154 06:52:47 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:33.154 06:52:47 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:33.154 06:52:47 -- target/multipath.sh@43 -- # nvmftestinit 00:15:33.154 06:52:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:33.154 06:52:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.154 06:52:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:33.154 06:52:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:33.154 06:52:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:33.154 06:52:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.154 06:52:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:33.154 06:52:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.154 06:52:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:33.154 06:52:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:33.154 06:52:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:33.154 06:52:47 -- common/autotest_common.sh@10 -- # set +x 00:15:35.736 06:52:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.736 06:52:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:35.736 06:52:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:35.736 06:52:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:35.736 06:52:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:35.736 06:52:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:35.736 06:52:49 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:15:35.736 06:52:49 -- nvmf/common.sh@294 -- # net_devs=() 00:15:35.736 06:52:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:35.736 06:52:49 -- nvmf/common.sh@295 -- # e810=() 00:15:35.736 06:52:49 -- nvmf/common.sh@295 -- # local -ga e810 00:15:35.736 06:52:49 -- nvmf/common.sh@296 -- # x722=() 00:15:35.736 06:52:49 -- nvmf/common.sh@296 -- # local -ga x722 00:15:35.736 06:52:49 -- nvmf/common.sh@297 -- # mlx=() 00:15:35.736 06:52:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:35.736 06:52:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.736 06:52:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:35.736 06:52:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:35.737 06:52:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:35.737 06:52:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.737 06:52:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:35.737 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:35.737 06:52:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.737 06:52:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:35.737 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:35.737 06:52:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:35.737 06:52:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.737 06:52:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.737 06:52:49 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:15:35.737 06:52:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.737 06:52:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:35.737 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:35.737 06:52:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.737 06:52:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.737 06:52:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.737 06:52:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:35.737 06:52:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.737 06:52:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:35.737 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:35.737 06:52:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.737 06:52:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:35.737 06:52:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:35.737 06:52:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:35.737 06:52:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.737 06:52:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.737 06:52:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.737 06:52:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:35.737 06:52:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.737 06:52:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.737 06:52:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:35.737 06:52:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.737 06:52:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.737 06:52:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:35.737 06:52:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:35.737 06:52:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.737 06:52:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.737 06:52:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.737 06:52:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.737 06:52:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:35.737 06:52:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.737 06:52:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.737 06:52:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.737 06:52:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:35.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:15:35.737 00:15:35.737 --- 10.0.0.2 ping statistics --- 00:15:35.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.737 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:15:35.737 06:52:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:15:35.737 00:15:35.737 --- 10.0.0.1 ping statistics --- 00:15:35.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.737 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:15:35.737 06:52:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.737 06:52:49 -- nvmf/common.sh@410 -- # return 0 00:15:35.737 06:52:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:35.737 06:52:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.737 06:52:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.737 06:52:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:35.737 06:52:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:35.737 06:52:49 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:35.737 06:52:49 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:35.737 only one NIC for nvmf test 00:15:35.737 06:52:49 -- target/multipath.sh@47 -- # nvmftestfini 00:15:35.737 06:52:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:35.737 06:52:49 -- nvmf/common.sh@116 -- # sync 00:15:35.737 06:52:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:35.737 06:52:49 -- nvmf/common.sh@119 -- # set +e 00:15:35.737 06:52:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:35.737 06:52:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:35.737 rmmod nvme_tcp 00:15:35.737 rmmod nvme_fabrics 00:15:35.737 rmmod nvme_keyring 00:15:35.737 06:52:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:35.737 06:52:49 -- nvmf/common.sh@123 -- # set -e 00:15:35.737 06:52:49 -- nvmf/common.sh@124 -- # return 0 00:15:35.737 06:52:49 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:35.737 06:52:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:35.737 06:52:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:35.737 06:52:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.737 06:52:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:35.737 06:52:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.737 06:52:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.737 06:52:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.643 06:52:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:37.643 06:52:51 -- target/multipath.sh@48 -- # exit 0 00:15:37.643 06:52:51 -- target/multipath.sh@1 -- # nvmftestfini 00:15:37.643 06:52:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:37.643 06:52:51 -- nvmf/common.sh@116 -- # sync 00:15:37.643 06:52:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:37.643 06:52:51 -- nvmf/common.sh@119 -- # set +e 00:15:37.643 06:52:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:37.643 06:52:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:37.643 06:52:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:37.643 06:52:51 -- nvmf/common.sh@123 -- # set -e 00:15:37.643 06:52:51 -- nvmf/common.sh@124 -- # return 0 00:15:37.643 06:52:51 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:37.643 06:52:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:37.643 06:52:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:37.643 06:52:51 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini
00:15:37.643 06:52:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:37.643 06:52:51 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:15:37.643 06:52:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:37.643 06:52:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:37.643 06:52:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:37.643 06:52:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:15:37.643
00:15:37.643 real 0m4.734s
00:15:37.643 user 0m0.975s
00:15:37.643 sys 0m1.758s
00:15:37.643 06:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:15:37.643 06:52:51 -- common/autotest_common.sh@10 -- # set +x
00:15:37.643 ************************************
00:15:37.643 END TEST nvmf_multipath
00:15:37.643 ************************************
00:15:37.902 06:52:51 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:15:37.902 06:52:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:15:37.902 06:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:15:37.902 06:52:51 -- common/autotest_common.sh@10 -- # set +x
00:15:37.902 ************************************
00:15:37.902 START TEST nvmf_zcopy
00:15:37.902 ************************************
00:15:37.902 06:52:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:15:37.902 * Looking for test storage...
00:15:37.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:37.902 06:52:51 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:37.902 06:52:51 -- nvmf/common.sh@7 -- # uname -s
00:15:37.902 06:52:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:37.902 06:52:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:37.902 06:52:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:37.903 06:52:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:37.903 06:52:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:37.903 06:52:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:37.903 06:52:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:37.903 06:52:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:37.903 06:52:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:37.903 06:52:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:37.903 06:52:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:37.903 06:52:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:15:37.903 06:52:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:37.903 06:52:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:37.903 06:52:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:37.903 06:52:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:37.903 06:52:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:37.903 06:52:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:37.903 06:52:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:37.903 06:52:51 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.903 06:52:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.903 06:52:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.903 06:52:51 -- paths/export.sh@5 -- # export PATH 00:15:37.903 06:52:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.903 06:52:51 -- nvmf/common.sh@46 -- # : 0 00:15:37.903 06:52:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.903 06:52:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.903 06:52:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.903 06:52:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.903 06:52:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.903 06:52:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:37.903 06:52:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.903 06:52:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.903 06:52:51 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:37.903 06:52:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:37.903 06:52:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.903 06:52:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.903 06:52:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.903 06:52:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.903 06:52:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.903 06:52:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.903 06:52:51 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.903 06:52:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:37.903 06:52:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:37.903 06:52:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:37.903 06:52:51 -- common/autotest_common.sh@10 -- # set +x 00:15:40.432 06:52:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:40.432 06:52:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:40.432 06:52:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:40.432 06:52:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:40.432 06:52:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:40.432 06:52:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:40.432 06:52:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:40.432 06:52:54 -- nvmf/common.sh@294 -- # net_devs=() 00:15:40.432 06:52:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:40.432 06:52:54 -- nvmf/common.sh@295 -- # e810=() 00:15:40.432 06:52:54 -- nvmf/common.sh@295 -- # local -ga e810 00:15:40.432 06:52:54 -- nvmf/common.sh@296 -- # x722=() 00:15:40.432 06:52:54 -- nvmf/common.sh@296 -- # local -ga x722 00:15:40.432 06:52:54 -- nvmf/common.sh@297 -- # mlx=() 00:15:40.432 06:52:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:40.432 06:52:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:40.432 06:52:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:40.432 06:52:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:40.432 06:52:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:40.432 06:52:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:40.432 06:52:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:40.432 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:40.432 06:52:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:40.432 06:52:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:40.432 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:40.432 
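
The trace above is nvmf/common.sh rebuilding its NIC inventory for the zcopy test: gather_supported_nvmf_pci_devs seeds the e810/x722/mlx arrays with known vendor:device IDs (0x8086:0x159b is the Intel E810-family part found here), then resolves each matching PCI address to the kernel netdev bound to it through sysfs, producing the "Found net devices under ..." lines that follow. A minimal standalone sketch of just that sysfs lookup, using the 0000:0a:00.0 address reported in this run (pci_bus_cache itself is populated elsewhere in the script and is not shown here):

  #!/usr/bin/env bash
  # Resolve a PCI address to the net interface(s) the kernel bound to it.
  pci=0000:0a:00.0                                    # E810 port seen in this log
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one glob hit per bound netdev
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
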
06:52:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:40.432 06:52:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:40.432 06:52:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.432 06:52:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:40.432 06:52:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.432 06:52:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:40.432 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:40.432 06:52:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.432 06:52:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:40.432 06:52:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:40.432 06:52:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:40.432 06:52:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:40.432 06:52:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:40.432 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:40.432 06:52:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:40.432 06:52:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:40.432 06:52:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:40.432 06:52:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:40.432 06:52:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.432 06:52:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.432 06:52:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:40.432 06:52:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:40.432 06:52:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:40.432 06:52:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:40.432 06:52:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:40.432 06:52:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:40.432 06:52:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.432 06:52:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:40.432 06:52:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:40.432 06:52:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:40.432 06:52:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:40.432 06:52:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:40.432 06:52:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:40.432 06:52:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:40.432 06:52:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:40.432 06:52:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:40.432 06:52:54 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:40.432 06:52:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:40.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:15:40.432 00:15:40.432 --- 10.0.0.2 ping statistics --- 00:15:40.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.432 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:15:40.432 06:52:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:40.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:15:40.432 00:15:40.432 --- 10.0.0.1 ping statistics --- 00:15:40.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.432 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:15:40.432 06:52:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.432 06:52:54 -- nvmf/common.sh@410 -- # return 0 00:15:40.432 06:52:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:40.432 06:52:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.432 06:52:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:40.432 06:52:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.432 06:52:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:40.432 06:52:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:40.432 06:52:54 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:40.432 06:52:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:40.432 06:52:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:40.432 06:52:54 -- common/autotest_common.sh@10 -- # set +x 00:15:40.432 06:52:54 -- nvmf/common.sh@469 -- # nvmfpid=497412 00:15:40.432 06:52:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:40.433 06:52:54 -- nvmf/common.sh@470 -- # waitforlisten 497412 00:15:40.433 06:52:54 -- common/autotest_common.sh@819 -- # '[' -z 497412 ']' 00:15:40.433 06:52:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.433 06:52:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:40.433 06:52:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.433 06:52:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:40.433 06:52:54 -- common/autotest_common.sh@10 -- # set +x 00:15:40.433 [2024-05-15 06:52:54.549778] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:15:40.433 [2024-05-15 06:52:54.549848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.433 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.433 [2024-05-15 06:52:54.631808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.690 [2024-05-15 06:52:54.754396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:40.690 [2024-05-15 06:52:54.754569] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.690 [2024-05-15 06:52:54.754588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.690 [2024-05-15 06:52:54.754603] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.690 [2024-05-15 06:52:54.754645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.623 06:52:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:41.623 06:52:55 -- common/autotest_common.sh@852 -- # return 0 00:15:41.623 06:52:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:41.623 06:52:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:41.623 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.623 06:52:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.624 06:52:55 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:41.624 06:52:55 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:41.624 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.624 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 [2024-05-15 06:52:55.609550] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.624 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.624 06:52:55 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:41.624 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.624 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.624 06:52:55 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.624 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.624 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 [2024-05-15 06:52:55.625705] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.624 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.624 06:52:55 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:41.624 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.624 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.624 06:52:55 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:41.624 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.624 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 malloc0 00:15:41.624 06:52:55 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:15:41.624 06:52:55 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:41.624 06:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:41.624 06:52:55 -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 06:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:41.624 06:52:55 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:41.624 06:52:55 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:41.624 06:52:55 -- nvmf/common.sh@520 -- # config=() 00:15:41.624 06:52:55 -- nvmf/common.sh@520 -- # local subsystem config 00:15:41.624 06:52:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:41.624 06:52:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:41.624 { 00:15:41.624 "params": { 00:15:41.624 "name": "Nvme$subsystem", 00:15:41.624 "trtype": "$TEST_TRANSPORT", 00:15:41.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:41.624 "adrfam": "ipv4", 00:15:41.624 "trsvcid": "$NVMF_PORT", 00:15:41.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:41.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:41.624 "hdgst": ${hdgst:-false}, 00:15:41.624 "ddgst": ${ddgst:-false} 00:15:41.624 }, 00:15:41.624 "method": "bdev_nvme_attach_controller" 00:15:41.624 } 00:15:41.624 EOF 00:15:41.624 )") 00:15:41.624 06:52:55 -- nvmf/common.sh@542 -- # cat 00:15:41.624 06:52:55 -- nvmf/common.sh@544 -- # jq . 00:15:41.624 06:52:55 -- nvmf/common.sh@545 -- # IFS=, 00:15:41.624 06:52:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:41.624 "params": { 00:15:41.624 "name": "Nvme1", 00:15:41.624 "trtype": "tcp", 00:15:41.624 "traddr": "10.0.0.2", 00:15:41.624 "adrfam": "ipv4", 00:15:41.624 "trsvcid": "4420", 00:15:41.624 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.624 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.624 "hdgst": false, 00:15:41.624 "ddgst": false 00:15:41.624 }, 00:15:41.624 "method": "bdev_nvme_attach_controller" 00:15:41.624 }' 00:15:41.624 [2024-05-15 06:52:55.698127] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:41.624 [2024-05-15 06:52:55.698208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid497571 ] 00:15:41.624 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.624 [2024-05-15 06:52:55.771862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.882 [2024-05-15 06:52:55.890193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.882 Running I/O for 10 seconds... 
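
Before the I/O run above starts, the rpc_cmd calls in this trace assemble the whole target: a zero-copy TCP transport, a 32 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, so an equivalent manual sequence looks roughly like the following sketch (the $rpc variable is illustrative; the method names and flags are the ones visible in this log):

  # Assumes nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace.
  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM bdev, 4 KiB blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then connects as an initiator using the JSON config generated above (name Nvme1, trtype tcp, traddr 10.0.0.2) and drives the 10-second verify workload at queue depth 128 with 8 KiB I/O; its results follow.
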
00:15:54.083
00:15:54.083 Latency(us)
00:15:54.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:54.083 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:54.083 Verification LBA range: start 0x0 length 0x1000
00:15:54.083 Nvme1n1 : 10.01 9013.42 70.42 0.00 0.00 14167.43 1881.13 23495.87
00:15:54.083 ===================================================================================================================
00:15:54.083 Total : 9013.42 70.42 0.00 0.00 14167.43 1881.13 23495.87
00:15:54.083 06:53:06 -- target/zcopy.sh@39 -- # perfpid=498805
00:15:54.083 06:53:06 -- target/zcopy.sh@41 -- # xtrace_disable
00:15:54.083 06:53:06 -- common/autotest_common.sh@10 -- # set +x
00:15:54.083 06:53:06 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:54.083 06:53:06 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:54.083 06:53:06 -- nvmf/common.sh@520 -- # config=()
00:15:54.083 06:53:06 -- nvmf/common.sh@520 -- # local subsystem config
00:15:54.083 06:53:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:54.083 06:53:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:54.083 {
00:15:54.083 "params": {
00:15:54.083 "name": "Nvme$subsystem",
00:15:54.083 "trtype": "$TEST_TRANSPORT",
00:15:54.083 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:54.083 "adrfam": "ipv4",
00:15:54.083 "trsvcid": "$NVMF_PORT",
00:15:54.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:54.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:54.083 "hdgst": ${hdgst:-false},
00:15:54.083 "ddgst": ${ddgst:-false}
00:15:54.083 },
00:15:54.083 "method": "bdev_nvme_attach_controller"
00:15:54.083 }
00:15:54.083 EOF
00:15:54.083 )")
00:15:54.083 06:53:06 -- nvmf/common.sh@542 -- # cat
00:15:54.083 [2024-05-15 06:53:06.409693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:54.083 [2024-05-15 06:53:06.409742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:54.083 06:53:06 -- nvmf/common.sh@544 -- # jq .
00:15:54.083 06:53:06 -- nvmf/common.sh@545 -- # IFS=, 00:15:54.083 06:53:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:54.083 "params": { 00:15:54.083 "name": "Nvme1", 00:15:54.083 "trtype": "tcp", 00:15:54.083 "traddr": "10.0.0.2", 00:15:54.083 "adrfam": "ipv4", 00:15:54.083 "trsvcid": "4420", 00:15:54.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:54.083 "hdgst": false, 00:15:54.083 "ddgst": false 00:15:54.083 }, 00:15:54.083 "method": "bdev_nvme_attach_controller" 00:15:54.083 }' 00:15:54.083 [2024-05-15 06:53:06.417653] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.083 [2024-05-15 06:53:06.417679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.083 [2024-05-15 06:53:06.425674] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.083 [2024-05-15 06:53:06.425700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.083 [2024-05-15 06:53:06.433693] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.083 [2024-05-15 06:53:06.433717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.083 [2024-05-15 06:53:06.439213] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:54.083 [2024-05-15 06:53:06.439287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498805 ] 00:15:54.083 [2024-05-15 06:53:06.441707] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.083 [2024-05-15 06:53:06.441730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.083 [2024-05-15 06:53:06.449741] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.083 [2024-05-15 06:53:06.449766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.083 [2024-05-15 06:53:06.457747] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.083 [2024-05-15 06:53:06.457767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.083 [2024-05-15 06:53:06.465782] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.083 [2024-05-15 06:53:06.465806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.083 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.083 [2024-05-15 06:53:06.473803] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.473827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.481824] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.481848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.489846] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.489870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.497871] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:15:54.084 [2024-05-15 06:53:06.497895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.505878] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.505898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.510242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.084 [2024-05-15 06:53:06.513938] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.513966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.521982] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.522022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.529967] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.529992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.537988] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.538012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.546009] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.546032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.554024] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.554048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.562045] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.562069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.570075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.570101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.578131] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.578173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.586113] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.586139] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.594133] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.594157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.602155] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.602179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.610172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 
06:53:06.610194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.618204] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.618228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.623269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.084 [2024-05-15 06:53:06.626239] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.626264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.634261] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.634285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.642318] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.642359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.650333] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.650388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.658361] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.658404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.666378] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.666421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.674405] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.674452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.682424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.682466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.690411] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.690435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.698461] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.698497] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.706490] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.706531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.714508] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.714550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.722497] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.722521] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.730518] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.730542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.738550] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.738580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.746571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.746598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.754595] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.754621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.762617] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.762644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.770644] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.770671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.778668] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.778703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.786688] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.786713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.794720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.794756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.802733] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.802758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 Running I/O for 5 seconds... 
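
The repeating error pairs here (and continuing below) are expected noise from the test itself rather than a failure: while the second bdevperf instance starts and runs its 5-second randrw workload, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1. Each attempt pauses the subsystem (hence nvmf_rpc_ns_paused), is correctly rejected because the namespace already exists, and resumes it, exercising subsystem pause/resume while zero-copy I/O is in flight. The exact loop body in zcopy.sh is not echoed in this log; a plausible sketch of its shape, reusing the $rpc wrapper from the earlier sketch:

  # Illustrative only: hammer namespace hot-add while bdevperf I/O is in flight.
  # Every attempt is expected to fail with "Requested NSID 1 already in use".
  for i in $(seq 1 200); do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
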
00:15:54.084 [2024-05-15 06:53:06.810756] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.810781] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.824057] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.824086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.834296] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.834327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.846146] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.846174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.856587] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.856614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.867548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.867579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.877508] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.877538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.888790] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.888821] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.899729] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.899760] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.910704] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.910734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.920859] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.920887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.931793] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.931819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.941458] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.941485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.084 [2024-05-15 06:53:06.951690] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.084 [2024-05-15 06:53:06.951717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:06.961677] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 
[2024-05-15 06:53:06.961704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:06.972106] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:06.972140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:06.984519] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:06.984547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:06.995288] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:06.995314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.004034] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.004060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.014346] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.014373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.023462] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.023489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.033067] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.033094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.043395] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.043423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.053286] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.053312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.063664] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.063691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.073695] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.073722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.083974] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.084000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.093631] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.093657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.104251] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.085 [2024-05-15 06:53:07.104277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.085 [2024-05-15 06:53:07.113952] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:54.085 [2024-05-15 06:53:07.113979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors repeats roughly every 10 ms, from 06:53:07.124584 through 06:53:10.192360 (elapsed 00:15:54.085 to 00:15:56.158) ...]
00:15:56.158 [2024-05-15 06:53:10.201575] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:56.158 [2024-05-15 06:53:10.201614]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.212393] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.212420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.224900] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.224927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.234338] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.234387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.245067] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.245094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.255020] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.255046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.264950] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.264977] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.275196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.275223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.287192] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.287219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.296075] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.296102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.306442] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.306469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.316908] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.316942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.328576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.328602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.337217] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.337244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.347714] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.347742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.357171] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.357198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.367393] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.367419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.379328] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.379354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.158 [2024-05-15 06:53:10.387944] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.158 [2024-05-15 06:53:10.387970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.400909] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.400943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.410329] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.410356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.420787] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.420814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.430571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.430598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.441900] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.441927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.453307] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.453333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.463477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.463504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.473605] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.473632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.484420] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.484446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.493910] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.493944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.503714] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.503741] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.513318] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.513346] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.523817] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.523844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.533843] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.533870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.546165] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.546192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.557121] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.557148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.565599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.565625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.576431] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.416 [2024-05-15 06:53:10.576458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.416 [2024-05-15 06:53:10.585801] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.417 [2024-05-15 06:53:10.585828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.417 [2024-05-15 06:53:10.595349] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.417 [2024-05-15 06:53:10.595376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.417 [2024-05-15 06:53:10.605550] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.417 [2024-05-15 06:53:10.605577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.417 [2024-05-15 06:53:10.615571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.417 [2024-05-15 06:53:10.615598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.417 [2024-05-15 06:53:10.625188] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.417 [2024-05-15 06:53:10.625215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.417 [2024-05-15 06:53:10.635878] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.417 [2024-05-15 06:53:10.635905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.417 [2024-05-15 06:53:10.647565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.417 [2024-05-15 06:53:10.647592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.656161] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.656187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.668729] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.668756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.677666] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.677692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.689645] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.689672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.698172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.698198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.710597] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.710623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.722084] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.722111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.730814] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.730840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.742171] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.742198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.752181] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.752207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.761882] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.761908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.771858] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.771885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.781696] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.781723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.792103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.792130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.803574] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.803601] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.812257] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.812283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.823137] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.823164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.832659] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.832686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.842985] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.843012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.852799] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.852827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.863225] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.863252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.873251] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.873278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.883486] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.883512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.675 [2024-05-15 06:53:10.895409] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.675 [2024-05-15 06:53:10.895435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.676 [2024-05-15 06:53:10.904277] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.676 [2024-05-15 06:53:10.904304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.916713] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.916740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.928004] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.928031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.936616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.936644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.947501] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.947528] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.966167] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.966196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.976384] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.976411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.985709] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.985736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:10.996452] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:10.996478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.006079] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.006106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.016039] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.016066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.026841] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.026868] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.037171] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.037197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.049250] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.049277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.058490] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.058517] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.068483] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.068510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.078145] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.078171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.088600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.088627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.098008] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.098035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.108532] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.108558] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.117862] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.117889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.128226] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.934 [2024-05-15 06:53:11.128254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.934 [2024-05-15 06:53:11.137440] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.935 [2024-05-15 06:53:11.137467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.935 [2024-05-15 06:53:11.147938] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.935 [2024-05-15 06:53:11.147965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.935 [2024-05-15 06:53:11.159671] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.935 [2024-05-15 06:53:11.159697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:56.935 [2024-05-15 06:53:11.168429] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:56.935 [2024-05-15 06:53:11.168455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.177725] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.177753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.188401] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.188428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.197793] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.197820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.208397] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.208436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.218139] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.218167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.228682] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.228708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.238380] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.238421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.249185] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.249212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.261086] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.261114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.269682] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.269708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.280519] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.280546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.291853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.291880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.300504] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.300532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.311387] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.311415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.321447] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.193 [2024-05-15 06:53:11.321474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.193 [2024-05-15 06:53:11.331528] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.331555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.344189] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.344216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.355179] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.355206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.364061] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.364088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.374612] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.374639] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.383726] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.383753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.394194] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.394221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.404489] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.404523] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.414244] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.414280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.194 [2024-05-15 06:53:11.424749] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.194 [2024-05-15 06:53:11.424776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.434530] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.434558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.444801] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.444828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.455315] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.455342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.465576] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.465603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.475951] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.475979] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.486025] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.486052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.495752] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.495779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.505814] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.505840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.515141] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.515168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.525309] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.525335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.535350] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.535375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.545074] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.545100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.555315] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.555342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.567548] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.567575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.576864] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.576891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.586855] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.586882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.596764] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.596797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.607204] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.607231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.616649] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.616676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.626656] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.626683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.452 [2024-05-15 06:53:11.636600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.452 [2024-05-15 06:53:11.636626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.453 [2024-05-15 06:53:11.646992] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.453 [2024-05-15 06:53:11.647018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.453 [2024-05-15 06:53:11.656444] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.453 [2024-05-15 06:53:11.656470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.453 [2024-05-15 06:53:11.667542] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.453 [2024-05-15 06:53:11.667569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.453 [2024-05-15 06:53:11.676340] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.453 [2024-05-15 06:53:11.676367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.453 [2024-05-15 06:53:11.686736] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.453 [2024-05-15 06:53:11.686763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.696832] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.696858] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.707417] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.707443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.717830] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.717857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.727921] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.727969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.737499] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.737526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.747616] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.747642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.757862] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.757890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.768944] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.768972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.779462] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.779488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.788858] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.788891] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.799123] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.799150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.810203] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.810230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.818676] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.818701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 [2024-05-15 06:53:11.827619] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.714 [2024-05-15 06:53:11.827645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.714 00:15:57.714 Latency(us) 00:15:57.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.714 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:57.714 Nvme1n1 : 5.01 12624.26 98.63 0.00 0.00 10124.83 3021.94 25049.32 00:15:57.714 
00:15:57.714 ===================================================================================================================
00:15:57.714 Total                                                  :              12624.26      98.63       0.00     0.00   10124.83    3021.94   25049.32
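[note: the summary row is internally consistent. With the job's 8192-byte I/O size, 12624.26 IOPS x 8192 B = 103.4 MB/s = 98.63 MiB/s, matching the MiB/s column; and 12624.26 IOPS x 10124.83 us average latency = 127.8 outstanding commands, matching the configured queue depth of 128, as Little's law predicts. Latency columns are microseconds, per the "Latency(us)" header.]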
00:15:57.714 [2024-05-15 06:53:11.833521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:57.714 [2024-05-15 06:53:11.833548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair resumes after the latency summary and repeats roughly every 8 ms from 06:53:11.833 through 06:53:12.106 (log timestamps 00:15:57.714 through 00:15:58.005) as the remaining queued add-namespace RPCs drain; identical repetitions elided ...]
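[note: the paired errors above are this test's expected negative path: spdk_nvmf_subsystem_add_ns_ext() rejects an NSID that is already attached, and the RPC layer then reports "Unable to add namespace". As a rough standalone illustration only (a sketch, not part of the captured run; the subsystem NQN is taken from the log, the spare bdev name malloc1 is hypothetical), the same error pair can be provoked against a running target with the standard rpc.py subcommands:]

    # Sketch: provoke "Requested NSID 1 already in use" by hand, assuming a
    # target that already exposes NSID 1 under nqn.2016-06.io.spdk:cnode1.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc bdev_malloc_create -b malloc1 64 512                            # spare 64 MiB bdev, 512 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1   # fails: NSID 1 is taken
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1           # detach the current NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1   # now succeeds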
00:15:58.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (498805) - No such process
00:15:58.005 06:53:12 -- target/zcopy.sh@49 -- # wait 498805
00:15:58.005 06:53:12 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:58.005 06:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:58.005 06:53:12 -- common/autotest_common.sh@10 -- # set +x
00:15:58.005 06:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:58.005 06:53:12 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:58.005 06:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:58.005 06:53:12 -- common/autotest_common.sh@10 -- # set +x
00:15:58.005 delay0
00:15:58.005 06:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:58.005 06:53:12 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:15:58.005 06:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:58.005 06:53:12 -- common/autotest_common.sh@10 -- # set +x
00:15:58.005 06:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:58.005 06:53:12 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
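[note: the zcopy.sh@52-@56 calls above set up the abort pass: the real namespace is detached and replaced by delay0, a delay bdev stacked on malloc0 that adds about one second to every I/O (-r/-t are average/p99 read latency, -w/-n average/p99 write latency, in microseconds), so queued commands live long enough for the abort example to cancel them. A rough hand-run equivalent, assuming the SPDK checkout as the working directory and the target from this log (a sketch, not the test script itself):]

    # Sketch: build the deliberately slow namespace and drive it with the
    # bundled abort example, mirroring the zcopy.sh steps above.
    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s added to every read and write
    $rpc nvmf_subsystem_add_ns "$nqn" delay0 -n 1

    # Every I/O now takes ~1 s, so abort requests have in-flight targets:
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1"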
00:15:58.262 EAL: No free 2048 kB hugepages reported on node 1
00:15:58.262 [2024-05-15 06:53:12.230602] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:16:04.817 Initializing NVMe Controllers
00:16:04.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:16:04.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:04.817 Initialization complete. Launching workers.
00:16:04.817 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 44
00:16:04.817 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 331, failed to submit 33
00:16:04.817 success 102, unsuccess 229, failed 0
00:16:04.817 06:53:18 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:16:04.817 06:53:18 -- target/zcopy.sh@60 -- # nvmftestfini
00:16:04.817 06:53:18 -- nvmf/common.sh@476 -- # nvmfcleanup
00:16:04.817 06:53:18 -- nvmf/common.sh@116 -- # sync
00:16:04.817 06:53:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:16:04.817 06:53:18 -- nvmf/common.sh@119 -- # set +e
00:16:04.817 06:53:18 -- nvmf/common.sh@120 -- # for i in {1..20}
00:16:04.817 06:53:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:16:04.817 rmmod nvme_tcp
00:16:04.817 rmmod nvme_fabrics
00:16:04.817 rmmod nvme_keyring
00:16:04.817 06:53:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:16:04.817 06:53:18 -- nvmf/common.sh@123 -- # set -e
00:16:04.817 06:53:18 -- nvmf/common.sh@124 -- # return 0
00:16:04.817 06:53:18 -- nvmf/common.sh@477 -- # '[' -n 497412 ']'
00:16:04.817 06:53:18 -- nvmf/common.sh@478 -- # killprocess 497412
00:16:04.817 06:53:18 -- common/autotest_common.sh@926 -- # '[' -z 497412 ']'
00:16:04.817 06:53:18 -- common/autotest_common.sh@930 -- # kill -0 497412
00:16:04.817 06:53:18 -- common/autotest_common.sh@931 -- # uname
00:16:04.817 06:53:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:16:04.817 06:53:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 497412
00:16:04.817 06:53:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:16:04.817 06:53:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:16:04.817 06:53:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 497412'
00:16:04.817 killing process with pid 497412
00:16:04.817 06:53:18 -- common/autotest_common.sh@945 -- # kill 497412
00:16:04.817 06:53:18 -- common/autotest_common.sh@950 -- # wait 497412
00:16:04.817 06:53:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:16:04.817 06:53:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:16:04.817 06:53:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:16:04.817 06:53:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:04.817 06:53:18 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:16:04.817 06:53:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:04.817 06:53:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:04.817 06:53:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:06.718 06:53:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:16:06.718
00:16:06.718 real	0m28.841s
00:16:06.718 user	0m41.690s
00:16:06.718 sys	0m8.814s
00:16:06.718 06:53:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:06.718 06:53:20 -- common/autotest_common.sh@10 -- # set +x
00:16:06.718 ************************************
00:16:06.718 END TEST nvmf_zcopy
00:16:06.718 ************************************
00:16:06.718 06:53:20 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:16:06.718 06:53:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:16:06.718 06:53:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
06:53:20 -- common/autotest_common.sh@10 -- # set +x 00:16:06.718 ************************************ 00:16:06.718 START TEST nvmf_nmic 00:16:06.718 ************************************ 00:16:06.718 06:53:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:06.718 * Looking for test storage... 00:16:06.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.718 06:53:20 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.718 06:53:20 -- nvmf/common.sh@7 -- # uname -s 00:16:06.718 06:53:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.718 06:53:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.718 06:53:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.718 06:53:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.718 06:53:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.718 06:53:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.718 06:53:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.718 06:53:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.718 06:53:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.718 06:53:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.718 06:53:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.718 06:53:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.718 06:53:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.718 06:53:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.718 06:53:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.718 06:53:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.718 06:53:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.718 06:53:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.718 06:53:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.718 06:53:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.719 06:53:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.719 06:53:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.719 06:53:20 -- paths/export.sh@5 -- # export PATH 00:16:06.719 06:53:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.719 06:53:20 -- nvmf/common.sh@46 -- # : 0 00:16:06.719 06:53:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:06.719 06:53:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:06.719 06:53:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:06.719 06:53:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.719 06:53:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.719 06:53:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:06.719 06:53:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:06.719 06:53:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:06.719 06:53:20 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.719 06:53:20 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.719 06:53:20 -- target/nmic.sh@14 -- # nvmftestinit 00:16:06.719 06:53:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:06.719 06:53:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.719 06:53:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:06.719 06:53:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:06.719 06:53:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:06.719 06:53:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.719 06:53:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.719 06:53:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.719 06:53:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:06.719 06:53:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:06.719 06:53:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:06.719 06:53:20 -- common/autotest_common.sh@10 -- # set +x 00:16:09.251 06:53:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:09.251 06:53:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:09.251 06:53:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:09.251 06:53:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:09.251 06:53:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:09.251 06:53:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:09.251 06:53:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:09.251 06:53:23 -- nvmf/common.sh@294 -- # net_devs=() 00:16:09.251 06:53:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:09.251 06:53:23 -- nvmf/common.sh@295 -- # 
e810=() 00:16:09.251 06:53:23 -- nvmf/common.sh@295 -- # local -ga e810 00:16:09.251 06:53:23 -- nvmf/common.sh@296 -- # x722=() 00:16:09.251 06:53:23 -- nvmf/common.sh@296 -- # local -ga x722 00:16:09.251 06:53:23 -- nvmf/common.sh@297 -- # mlx=() 00:16:09.251 06:53:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:09.251 06:53:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.251 06:53:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:09.251 06:53:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:09.251 06:53:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:09.251 06:53:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:09.251 06:53:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:09.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:09.251 06:53:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:09.251 06:53:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:09.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:09.251 06:53:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:09.251 06:53:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:09.251 06:53:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.251 06:53:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:09.251 06:53:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.251 06:53:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:09.251 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:16:09.251 06:53:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.251 06:53:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:09.251 06:53:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.251 06:53:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:09.251 06:53:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.251 06:53:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:09.251 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:09.251 06:53:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.251 06:53:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:09.251 06:53:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:09.251 06:53:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:09.251 06:53:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.251 06:53:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.251 06:53:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.251 06:53:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:09.251 06:53:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.251 06:53:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.251 06:53:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:09.251 06:53:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.251 06:53:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.251 06:53:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:09.251 06:53:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:09.251 06:53:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.251 06:53:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.251 06:53:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.251 06:53:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.251 06:53:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:09.251 06:53:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.251 06:53:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.251 06:53:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.251 06:53:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:09.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:16:09.251 00:16:09.251 --- 10.0.0.2 ping statistics --- 00:16:09.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.251 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:16:09.251 06:53:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:16:09.251 00:16:09.251 --- 10.0.0.1 ping statistics --- 00:16:09.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.251 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:16:09.251 06:53:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.251 06:53:23 -- nvmf/common.sh@410 -- # return 0 00:16:09.251 06:53:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:09.251 06:53:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.251 06:53:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:09.251 06:53:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.251 06:53:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:09.251 06:53:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:09.251 06:53:23 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:09.251 06:53:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:09.251 06:53:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:09.251 06:53:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.251 06:53:23 -- nvmf/common.sh@469 -- # nvmfpid=502532 00:16:09.251 06:53:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:09.251 06:53:23 -- nvmf/common.sh@470 -- # waitforlisten 502532 00:16:09.251 06:53:23 -- common/autotest_common.sh@819 -- # '[' -z 502532 ']' 00:16:09.251 06:53:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.251 06:53:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:09.251 06:53:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.252 06:53:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:09.252 06:53:23 -- common/autotest_common.sh@10 -- # set +x 00:16:09.252 [2024-05-15 06:53:23.481517] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:09.252 [2024-05-15 06:53:23.481597] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.509 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.509 [2024-05-15 06:53:23.561633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.509 [2024-05-15 06:53:23.683973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.510 [2024-05-15 06:53:23.684128] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.510 [2024-05-15 06:53:23.684147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.510 [2024-05-15 06:53:23.684162] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
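The namespace plumbing traced above reduces to a short sequence: one port of the rig's dual-port E810 is moved into a private namespace for the target, the peer port stays in the root namespace as the initiator, and the NVMe/TCP port is opened in iptables. A condensed sketch of what nvmf_tcp_init just did (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from this log and are specific to this rig; everything runs as root, after flushing any stale addresses):

ip netns add cvl_0_0_ns_spdk                                  # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # first E810 port -> target namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # peer port stays put: the initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                            # initiator -> target reachability check

nvmf_tgt itself is then launched inside that namespace (the `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt` line above), so with NET_TYPE=phy the initiator and target presumably exchange traffic over the back-to-back link between the two ports.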
00:16:09.510 [2024-05-15 06:53:23.684220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.510 [2024-05-15 06:53:23.684274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.510 [2024-05-15 06:53:23.684323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.510 [2024-05-15 06:53:23.684326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.442 06:53:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:10.442 06:53:24 -- common/autotest_common.sh@852 -- # return 0 00:16:10.442 06:53:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:10.442 06:53:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 06:53:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.442 06:53:24 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 [2024-05-15 06:53:24.434258] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 Malloc0 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 [2024-05-15 06:53:24.485179] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:10.442 test case1: single bdev can't be used in multiple subsystems 00:16:10.442 06:53:24 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 
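The rpc_cmd traces around this point (rpc_cmd is roughly a wrapper over scripts/rpc.py) boil down to the following sequence, shown here as a sketch with the long workspace path shortened to ./scripts/rpc.py; the final call is the one "test case1" below expects to fail:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # Malloc0 now claimed exclusive_write
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0     # expected to fail: bdev already claimed

The JSON-RPC error that follows ("bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target") is exactly the outcome the test asserts before moving on to test case2.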
00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@28 -- # nmic_status=0 00:16:10.442 06:53:24 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 [2024-05-15 06:53:24.509043] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:10.442 [2024-05-15 06:53:24.509072] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:10.442 [2024-05-15 06:53:24.509087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.442 request: 00:16:10.442 { 00:16:10.442 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:10.442 "namespace": { 00:16:10.442 "bdev_name": "Malloc0" 00:16:10.442 }, 00:16:10.442 "method": "nvmf_subsystem_add_ns", 00:16:10.442 "req_id": 1 00:16:10.442 } 00:16:10.442 Got JSON-RPC error response 00:16:10.442 response: 00:16:10.442 { 00:16:10.442 "code": -32602, 00:16:10.442 "message": "Invalid parameters" 00:16:10.442 } 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@29 -- # nmic_status=1 00:16:10.442 06:53:24 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:10.442 06:53:24 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:10.442 Adding namespace failed - expected result. 00:16:10.442 06:53:24 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:10.442 test case2: host connect to nvmf target in multiple paths 00:16:10.442 06:53:24 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:10.442 06:53:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.442 06:53:24 -- common/autotest_common.sh@10 -- # set +x 00:16:10.442 [2024-05-15 06:53:24.517169] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:10.442 06:53:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.442 06:53:24 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:11.004 06:53:25 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:11.568 06:53:25 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:11.568 06:53:25 -- common/autotest_common.sh@1177 -- # local i=0 00:16:11.568 06:53:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.568 06:53:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:16:11.568 06:53:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:13.464 06:53:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:13.464 06:53:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:13.464 06:53:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.464 06:53:27 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:16:13.464 06:53:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.464 06:53:27 -- common/autotest_common.sh@1187 -- # return 0 00:16:13.464 06:53:27 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:13.464 [global] 00:16:13.464 thread=1 00:16:13.464 invalidate=1 00:16:13.464 rw=write 00:16:13.464 time_based=1 00:16:13.464 runtime=1 00:16:13.464 ioengine=libaio 00:16:13.464 direct=1 00:16:13.464 bs=4096 00:16:13.464 iodepth=1 00:16:13.464 norandommap=0 00:16:13.464 numjobs=1 00:16:13.464 00:16:13.464 verify_dump=1 00:16:13.464 verify_backlog=512 00:16:13.464 verify_state_save=0 00:16:13.464 do_verify=1 00:16:13.464 verify=crc32c-intel 00:16:13.464 [job0] 00:16:13.464 filename=/dev/nvme0n1 00:16:13.464 Could not set queue depth (nvme0n1) 00:16:13.722 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.722 fio-3.35 00:16:13.722 Starting 1 thread 00:16:15.094 00:16:15.094 job0: (groupid=0, jobs=1): err= 0: pid=503188: Wed May 15 06:53:28 2024 00:16:15.094 read: IOPS=19, BW=79.8KiB/s (81.8kB/s)(80.0KiB/1002msec) 00:16:15.094 slat (nsec): min=14925, max=32720, avg=22889.80, stdev=8183.83 00:16:15.094 clat (usec): min=40693, max=41046, avg=40953.85, stdev=76.01 00:16:15.094 lat (usec): min=40715, max=41063, avg=40976.74, stdev=73.44 00:16:15.094 clat percentiles (usec): 00:16:15.094 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:15.094 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:15.094 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:15.094 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:15.094 | 99.99th=[41157] 00:16:15.094 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:16:15.094 slat (usec): min=7, max=29667, avg=82.50, stdev=1310.10 00:16:15.094 clat (usec): min=225, max=577, avg=266.99, stdev=35.92 00:16:15.094 lat (usec): min=243, max=30034, avg=349.50, stdev=1315.03 00:16:15.094 clat percentiles (usec): 00:16:15.094 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:16:15.094 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 265], 00:16:15.094 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 326], 00:16:15.094 | 99.00th=[ 367], 99.50th=[ 404], 99.90th=[ 578], 99.95th=[ 578], 00:16:15.094 | 99.99th=[ 578] 00:16:15.094 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:15.094 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:15.094 lat (usec) : 250=49.25%, 500=46.80%, 750=0.19% 00:16:15.094 lat (msec) : 50=3.76% 00:16:15.094 cpu : usr=0.60%, sys=1.20%, ctx=534, majf=0, minf=2 00:16:15.094 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:15.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.094 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.094 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:15.094 00:16:15.094 Run status group 0 (all jobs): 00:16:15.094 READ: bw=79.8KiB/s (81.8kB/s), 79.8KiB/s-79.8KiB/s (81.8kB/s-81.8kB/s), io=80.0KiB (81.9kB), run=1002-1002msec 00:16:15.094 WRITE: bw=2044KiB/s (2093kB/s), 2044KiB/s-2044KiB/s (2093kB/s-2093kB/s), io=2048KiB 
(2097kB), run=1002-1002msec 00:16:15.094 00:16:15.094 Disk stats (read/write): 00:16:15.094 nvme0n1: ios=43/512, merge=0/0, ticks=1683/137, in_queue=1820, util=98.70% 00:16:15.094 06:53:28 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:15.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:15.094 06:53:29 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:15.094 06:53:29 -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.094 06:53:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:15.094 06:53:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.094 06:53:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:15.094 06:53:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:15.094 06:53:29 -- common/autotest_common.sh@1210 -- # return 0 00:16:15.094 06:53:29 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:15.094 06:53:29 -- target/nmic.sh@53 -- # nvmftestfini 00:16:15.094 06:53:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:15.094 06:53:29 -- nvmf/common.sh@116 -- # sync 00:16:15.094 06:53:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:15.094 06:53:29 -- nvmf/common.sh@119 -- # set +e 00:16:15.094 06:53:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:15.094 06:53:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:15.094 rmmod nvme_tcp 00:16:15.094 rmmod nvme_fabrics 00:16:15.094 rmmod nvme_keyring 00:16:15.094 06:53:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:15.094 06:53:29 -- nvmf/common.sh@123 -- # set -e 00:16:15.094 06:53:29 -- nvmf/common.sh@124 -- # return 0 00:16:15.094 06:53:29 -- nvmf/common.sh@477 -- # '[' -n 502532 ']' 00:16:15.094 06:53:29 -- nvmf/common.sh@478 -- # killprocess 502532 00:16:15.094 06:53:29 -- common/autotest_common.sh@926 -- # '[' -z 502532 ']' 00:16:15.094 06:53:29 -- common/autotest_common.sh@930 -- # kill -0 502532 00:16:15.094 06:53:29 -- common/autotest_common.sh@931 -- # uname 00:16:15.094 06:53:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.094 06:53:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 502532 00:16:15.094 06:53:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.094 06:53:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.094 06:53:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 502532' 00:16:15.094 killing process with pid 502532 00:16:15.094 06:53:29 -- common/autotest_common.sh@945 -- # kill 502532 00:16:15.094 06:53:29 -- common/autotest_common.sh@950 -- # wait 502532 00:16:15.354 06:53:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:15.354 06:53:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:15.354 06:53:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:15.354 06:53:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.354 06:53:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:15.354 06:53:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.354 06:53:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.354 06:53:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.886 06:53:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:17.886 00:16:17.886 real 0m10.776s 00:16:17.886 user 0m24.286s 00:16:17.886 sys 0m2.554s 00:16:17.886 06:53:31 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:16:17.886 06:53:31 -- common/autotest_common.sh@10 -- # set +x 00:16:17.886 ************************************ 00:16:17.886 END TEST nvmf_nmic 00:16:17.886 ************************************ 00:16:17.886 06:53:31 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:17.886 06:53:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:17.886 06:53:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.886 06:53:31 -- common/autotest_common.sh@10 -- # set +x 00:16:17.886 ************************************ 00:16:17.886 START TEST nvmf_fio_target 00:16:17.886 ************************************ 00:16:17.886 06:53:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:17.886 * Looking for test storage... 00:16:17.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.886 06:53:31 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.886 06:53:31 -- nvmf/common.sh@7 -- # uname -s 00:16:17.886 06:53:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.886 06:53:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.886 06:53:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.886 06:53:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.886 06:53:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.886 06:53:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.886 06:53:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.886 06:53:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.886 06:53:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.886 06:53:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.886 06:53:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.886 06:53:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.886 06:53:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.886 06:53:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.886 06:53:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.886 06:53:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.886 06:53:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.886 06:53:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.886 06:53:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.886 06:53:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.886 06:53:31 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.886 06:53:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.886 06:53:31 -- paths/export.sh@5 -- # export PATH 00:16:17.886 06:53:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.886 06:53:31 -- nvmf/common.sh@46 -- # : 0 00:16:17.886 06:53:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:17.886 06:53:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:17.886 06:53:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:17.886 06:53:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.886 06:53:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.886 06:53:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:17.886 06:53:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:17.886 06:53:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:17.886 06:53:31 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.886 06:53:31 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.886 06:53:31 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.886 06:53:31 -- target/fio.sh@16 -- # nvmftestinit 00:16:17.886 06:53:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:17.886 06:53:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.886 06:53:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:17.886 06:53:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:17.886 06:53:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:17.886 06:53:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.886 06:53:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.886 06:53:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.886 06:53:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:17.886 06:53:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:17.886 06:53:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:17.886 06:53:31 -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.448 06:53:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:20.448 06:53:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:20.448 06:53:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:20.448 06:53:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:20.448 06:53:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:20.448 06:53:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:20.448 06:53:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:20.448 06:53:34 -- nvmf/common.sh@294 -- # net_devs=() 00:16:20.448 06:53:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:20.448 06:53:34 -- nvmf/common.sh@295 -- # e810=() 00:16:20.448 06:53:34 -- nvmf/common.sh@295 -- # local -ga e810 00:16:20.448 06:53:34 -- nvmf/common.sh@296 -- # x722=() 00:16:20.448 06:53:34 -- nvmf/common.sh@296 -- # local -ga x722 00:16:20.448 06:53:34 -- nvmf/common.sh@297 -- # mlx=() 00:16:20.448 06:53:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:20.448 06:53:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.448 06:53:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:20.448 06:53:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:20.448 06:53:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:20.448 06:53:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:20.448 06:53:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:20.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:20.448 06:53:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:20.448 06:53:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:20.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:20.448 06:53:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:16:20.448 06:53:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:20.448 06:53:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:20.448 06:53:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:20.448 06:53:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.448 06:53:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:20.448 06:53:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.448 06:53:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:20.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:20.449 06:53:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.449 06:53:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:20.449 06:53:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.449 06:53:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:20.449 06:53:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.449 06:53:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:20.449 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:20.449 06:53:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.449 06:53:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:20.449 06:53:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:20.449 06:53:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:20.449 06:53:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:20.449 06:53:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:20.449 06:53:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.449 06:53:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.449 06:53:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.449 06:53:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:20.449 06:53:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.449 06:53:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.449 06:53:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:20.449 06:53:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.449 06:53:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.449 06:53:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:20.449 06:53:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:20.449 06:53:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.449 06:53:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.449 06:53:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.449 06:53:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.449 06:53:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:20.449 06:53:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.449 06:53:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.449 06:53:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.449 06:53:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:20.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:20.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:16:20.449 00:16:20.449 --- 10.0.0.2 ping statistics --- 00:16:20.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.449 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:16:20.449 06:53:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:20.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:16:20.449 00:16:20.449 --- 10.0.0.1 ping statistics --- 00:16:20.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.449 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:16:20.449 06:53:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.449 06:53:34 -- nvmf/common.sh@410 -- # return 0 00:16:20.449 06:53:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:20.449 06:53:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.449 06:53:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:20.449 06:53:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:20.449 06:53:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.449 06:53:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:20.449 06:53:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:20.449 06:53:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:20.449 06:53:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:20.449 06:53:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:20.449 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:20.449 06:53:34 -- nvmf/common.sh@469 -- # nvmfpid=505692 00:16:20.449 06:53:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:20.449 06:53:34 -- nvmf/common.sh@470 -- # waitforlisten 505692 00:16:20.449 06:53:34 -- common/autotest_common.sh@819 -- # '[' -z 505692 ']' 00:16:20.449 06:53:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.449 06:53:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:20.449 06:53:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.449 06:53:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:20.449 06:53:34 -- common/autotest_common.sh@10 -- # set +x 00:16:20.449 [2024-05-15 06:53:34.386412] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:20.449 [2024-05-15 06:53:34.386495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.449 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.449 [2024-05-15 06:53:34.466746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.449 [2024-05-15 06:53:34.586845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:20.449 [2024-05-15 06:53:34.587029] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.449 [2024-05-15 06:53:34.587050] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:20.449 [2024-05-15 06:53:34.587065] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.449 [2024-05-15 06:53:34.587121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.449 [2024-05-15 06:53:34.587180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.449 [2024-05-15 06:53:34.587232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.449 [2024-05-15 06:53:34.587235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.381 06:53:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:21.381 06:53:35 -- common/autotest_common.sh@852 -- # return 0 00:16:21.381 06:53:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:21.381 06:53:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:21.381 06:53:35 -- common/autotest_common.sh@10 -- # set +x 00:16:21.381 06:53:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.381 06:53:35 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:21.381 [2024-05-15 06:53:35.562029] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.381 06:53:35 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.638 06:53:35 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:21.638 06:53:35 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:21.895 06:53:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:21.895 06:53:36 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:22.152 06:53:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:22.152 06:53:36 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:22.410 06:53:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:22.410 06:53:36 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:22.667 06:53:36 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:22.925 06:53:37 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:22.925 06:53:37 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:23.182 06:53:37 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:23.182 06:53:37 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:23.439 06:53:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:23.439 06:53:37 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:23.697 06:53:37 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:23.954 06:53:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:23.954 06:53:38 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.212 06:53:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:24.212 06:53:38 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:24.469 06:53:38 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.726 [2024-05-15 06:53:38.772305] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.726 06:53:38 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:24.984 06:53:39 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:25.241 06:53:39 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.807 06:53:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:25.807 06:53:39 -- common/autotest_common.sh@1177 -- # local i=0 00:16:25.807 06:53:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.807 06:53:39 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:25.807 06:53:39 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:25.807 06:53:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:27.704 06:53:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:27.704 06:53:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:27.704 06:53:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.704 06:53:41 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:27.704 06:53:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.704 06:53:41 -- common/autotest_common.sh@1187 -- # return 0 00:16:27.704 06:53:41 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:27.704 [global] 00:16:27.704 thread=1 00:16:27.704 invalidate=1 00:16:27.704 rw=write 00:16:27.704 time_based=1 00:16:27.704 runtime=1 00:16:27.704 ioengine=libaio 00:16:27.704 direct=1 00:16:27.704 bs=4096 00:16:27.704 iodepth=1 00:16:27.704 norandommap=0 00:16:27.704 numjobs=1 00:16:27.704 00:16:27.704 verify_dump=1 00:16:27.704 verify_backlog=512 00:16:27.704 verify_state_save=0 00:16:27.704 do_verify=1 00:16:27.704 verify=crc32c-intel 00:16:27.704 [job0] 00:16:27.704 filename=/dev/nvme0n1 00:16:27.704 [job1] 00:16:27.704 filename=/dev/nvme0n2 00:16:27.704 [job2] 00:16:27.704 filename=/dev/nvme0n3 00:16:27.704 [job3] 00:16:27.704 filename=/dev/nvme0n4 00:16:27.704 Could not set queue depth (nvme0n1) 00:16:27.704 Could not set queue depth (nvme0n2) 00:16:27.704 Could not set queue depth (nvme0n3) 00:16:27.704 Could not set queue depth (nvme0n4) 00:16:27.961 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.961 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.961 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:16:27.961 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:27.961 fio-3.35 00:16:27.961 Starting 4 threads 00:16:29.335 00:16:29.335 job0: (groupid=0, jobs=1): err= 0: pid=506799: Wed May 15 06:53:43 2024 00:16:29.335 read: IOPS=1027, BW=4112KiB/s (4211kB/s)(4116KiB/1001msec) 00:16:29.335 slat (nsec): min=6631, max=66325, avg=19724.97, stdev=9313.85 00:16:29.335 clat (usec): min=442, max=812, avg=513.99, stdev=40.55 00:16:29.335 lat (usec): min=449, max=845, avg=533.71, stdev=47.30 00:16:29.335 clat percentiles (usec): 00:16:29.335 | 1.00th=[ 453], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 486], 00:16:29.335 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 515], 00:16:29.335 | 70.00th=[ 529], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 594], 00:16:29.335 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 816], 00:16:29.335 | 99.99th=[ 816] 00:16:29.335 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:29.335 slat (nsec): min=7692, max=58327, avg=14953.38, stdev=6635.94 00:16:29.335 clat (usec): min=221, max=1687, avg=270.89, stdev=68.50 00:16:29.335 lat (usec): min=230, max=1710, avg=285.85, stdev=70.90 00:16:29.335 clat percentiles (usec): 00:16:29.335 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 237], 00:16:29.335 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 277], 00:16:29.335 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 343], 00:16:29.335 | 99.00th=[ 412], 99.50th=[ 453], 99.90th=[ 1483], 99.95th=[ 1680], 00:16:29.335 | 99.99th=[ 1680] 00:16:29.335 bw ( KiB/s): min= 6424, max= 6424, per=52.14%, avg=6424.00, stdev= 0.00, samples=1 00:16:29.335 iops : min= 1606, max= 1606, avg=1606.00, stdev= 0.00, samples=1 00:16:29.335 lat (usec) : 250=24.72%, 500=53.14%, 750=21.91%, 1000=0.08% 00:16:29.335 lat (msec) : 2=0.16% 00:16:29.335 cpu : usr=2.90%, sys=4.60%, ctx=2567, majf=0, minf=1 00:16:29.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.335 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.335 issued rwts: total=1029,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.335 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.335 job1: (groupid=0, jobs=1): err= 0: pid=506800: Wed May 15 06:53:43 2024 00:16:29.335 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:29.335 slat (nsec): min=5521, max=42348, avg=14079.59, stdev=4101.32 00:16:29.335 clat (usec): min=346, max=41196, avg=1365.01, stdev=6139.99 00:16:29.335 lat (usec): min=356, max=41214, avg=1379.09, stdev=6141.62 00:16:29.335 clat percentiles (usec): 00:16:29.335 | 1.00th=[ 355], 5.00th=[ 367], 10.00th=[ 379], 20.00th=[ 392], 00:16:29.335 | 30.00th=[ 400], 40.00th=[ 404], 50.00th=[ 412], 60.00th=[ 416], 00:16:29.335 | 70.00th=[ 420], 80.00th=[ 429], 90.00th=[ 461], 95.00th=[ 529], 00:16:29.335 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:29.335 | 99.99th=[41157] 00:16:29.335 write: IOPS=627, BW=2509KiB/s (2570kB/s)(2512KiB/1001msec); 0 zone resets 00:16:29.335 slat (usec): min=7, max=40347, avg=84.84, stdev=1609.25 00:16:29.335 clat (usec): min=227, max=1931, avg=374.89, stdev=102.99 00:16:29.336 lat (usec): min=245, max=40953, avg=459.73, stdev=1621.86 00:16:29.336 clat percentiles (usec): 00:16:29.336 | 1.00th=[ 237], 5.00th=[ 269], 10.00th=[ 289], 20.00th=[ 322], 
00:16:29.336 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 371], 00:16:29.336 | 70.00th=[ 383], 80.00th=[ 416], 90.00th=[ 478], 95.00th=[ 545], 00:16:29.336 | 99.00th=[ 644], 99.50th=[ 701], 99.90th=[ 1926], 99.95th=[ 1926], 00:16:29.336 | 99.99th=[ 1926] 00:16:29.336 bw ( KiB/s): min= 4096, max= 4096, per=33.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:29.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:29.336 lat (usec) : 250=1.49%, 500=90.79%, 750=6.32%, 1000=0.26% 00:16:29.336 lat (msec) : 2=0.09%, 50=1.05% 00:16:29.336 cpu : usr=0.90%, sys=2.10%, ctx=1142, majf=0, minf=2 00:16:29.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.336 issued rwts: total=512,628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.336 job2: (groupid=0, jobs=1): err= 0: pid=506802: Wed May 15 06:53:43 2024 00:16:29.336 read: IOPS=133, BW=533KiB/s (546kB/s)(552KiB/1035msec) 00:16:29.336 slat (nsec): min=7363, max=49869, avg=13565.70, stdev=7341.39 00:16:29.336 clat (usec): min=469, max=41318, avg=6198.33, stdev=13989.57 00:16:29.336 lat (usec): min=477, max=41357, avg=6211.90, stdev=13993.32 00:16:29.336 clat percentiles (usec): 00:16:29.336 | 1.00th=[ 474], 5.00th=[ 482], 10.00th=[ 486], 20.00th=[ 498], 00:16:29.336 | 30.00th=[ 502], 40.00th=[ 515], 50.00th=[ 519], 60.00th=[ 529], 00:16:29.336 | 70.00th=[ 562], 80.00th=[ 660], 90.00th=[41157], 95.00th=[41157], 00:16:29.336 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:29.336 | 99.99th=[41157] 00:16:29.336 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:16:29.336 slat (nsec): min=9404, max=74775, avg=19145.11, stdev=10002.05 00:16:29.336 clat (usec): min=241, max=2047, avg=320.60, stdev=96.50 00:16:29.336 lat (usec): min=251, max=2078, avg=339.75, stdev=99.42 00:16:29.336 clat percentiles (usec): 00:16:29.336 | 1.00th=[ 249], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 285], 00:16:29.336 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:16:29.336 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 371], 95.00th=[ 396], 00:16:29.336 | 99.00th=[ 449], 99.50th=[ 578], 99.90th=[ 2040], 99.95th=[ 2040], 00:16:29.336 | 99.99th=[ 2040] 00:16:29.336 bw ( KiB/s): min= 4096, max= 4096, per=33.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:29.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:29.336 lat (usec) : 250=0.92%, 500=82.15%, 750=13.08%, 1000=0.46% 00:16:29.336 lat (msec) : 2=0.15%, 4=0.15%, 20=0.15%, 50=2.92% 00:16:29.336 cpu : usr=0.68%, sys=1.45%, ctx=652, majf=0, minf=1 00:16:29.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.336 issued rwts: total=138,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.336 job3: (groupid=0, jobs=1): err= 0: pid=506803: Wed May 15 06:53:43 2024 00:16:29.336 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:16:29.336 slat (nsec): min=15081, max=34040, avg=21197.09, stdev=7979.08 00:16:29.336 clat (usec): min=556, max=42362, avg=37520.10, 
stdev=11965.84 00:16:29.336 lat (usec): min=579, max=42377, avg=37541.30, stdev=11966.36 00:16:29.336 clat percentiles (usec): 00:16:29.336 | 1.00th=[ 553], 5.00th=[ 594], 10.00th=[40633], 20.00th=[41157], 00:16:29.336 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:29.336 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:16:29.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:29.336 | 99.99th=[42206] 00:16:29.336 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:16:29.336 slat (nsec): min=7406, max=63041, avg=19306.93, stdev=8656.85 00:16:29.336 clat (usec): min=240, max=830, avg=347.89, stdev=53.78 00:16:29.336 lat (usec): min=256, max=858, avg=367.20, stdev=53.77 00:16:29.336 clat percentiles (usec): 00:16:29.336 | 1.00th=[ 251], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 310], 00:16:29.336 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:16:29.336 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 416], 00:16:29.336 | 99.00th=[ 562], 99.50th=[ 644], 99.90th=[ 832], 99.95th=[ 832], 00:16:29.336 | 99.99th=[ 832] 00:16:29.336 bw ( KiB/s): min= 4096, max= 4096, per=33.24%, avg=4096.00, stdev= 0.00, samples=1 00:16:29.336 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:29.336 lat (usec) : 250=0.75%, 500=93.63%, 750=1.69%, 1000=0.19% 00:16:29.336 lat (msec) : 50=3.75% 00:16:29.336 cpu : usr=0.30%, sys=1.18%, ctx=537, majf=0, minf=1 00:16:29.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:29.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:29.336 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:29.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:29.336 00:16:29.336 Run status group 0 (all jobs): 00:16:29.336 READ: bw=6574KiB/s (6732kB/s), 86.5KiB/s-4112KiB/s (88.6kB/s-4211kB/s), io=6804KiB (6967kB), run=1001-1035msec 00:16:29.336 WRITE: bw=12.0MiB/s (12.6MB/s), 1979KiB/s-6138KiB/s (2026kB/s-6285kB/s), io=12.5MiB (13.1MB), run=1001-1035msec 00:16:29.336 00:16:29.336 Disk stats (read/write): 00:16:29.336 nvme0n1: ios=1079/1024, merge=0/0, ticks=573/267, in_queue=840, util=87.37% 00:16:29.336 nvme0n2: ios=518/512, merge=0/0, ticks=1501/178, in_queue=1679, util=89.19% 00:16:29.336 nvme0n3: ios=79/512, merge=0/0, ticks=775/157, in_queue=932, util=95.58% 00:16:29.336 nvme0n4: ios=80/512, merge=0/0, ticks=757/171, in_queue=928, util=96.39% 00:16:29.336 06:53:43 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:29.336 [global] 00:16:29.336 thread=1 00:16:29.336 invalidate=1 00:16:29.336 rw=randwrite 00:16:29.336 time_based=1 00:16:29.336 runtime=1 00:16:29.336 ioengine=libaio 00:16:29.336 direct=1 00:16:29.336 bs=4096 00:16:29.336 iodepth=1 00:16:29.336 norandommap=0 00:16:29.336 numjobs=1 00:16:29.336 00:16:29.336 verify_dump=1 00:16:29.336 verify_backlog=512 00:16:29.336 verify_state_save=0 00:16:29.336 do_verify=1 00:16:29.336 verify=crc32c-intel 00:16:29.336 [job0] 00:16:29.336 filename=/dev/nvme0n1 00:16:29.336 [job1] 00:16:29.336 filename=/dev/nvme0n2 00:16:29.336 [job2] 00:16:29.336 filename=/dev/nvme0n3 00:16:29.336 [job3] 00:16:29.336 filename=/dev/nvme0n4 00:16:29.336 Could not set queue depth (nvme0n1) 00:16:29.336 Could not set queue depth (nvme0n2) 
00:16:29.336 Could not set queue depth (nvme0n3) 00:16:29.336 Could not set queue depth (nvme0n4) 00:16:29.336 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.336 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.336 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.336 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:29.336 fio-3.35 00:16:29.336 Starting 4 threads 00:16:30.704 00:16:30.704 job0: (groupid=0, jobs=1): err= 0: pid=507033: Wed May 15 06:53:44 2024 00:16:30.704 read: IOPS=953, BW=3812KiB/s (3904kB/s)(3816KiB/1001msec) 00:16:30.704 slat (nsec): min=5715, max=72309, avg=14477.92, stdev=10398.23 00:16:30.704 clat (usec): min=406, max=1310, avg=571.89, stdev=143.14 00:16:30.704 lat (usec): min=412, max=1344, avg=586.37, stdev=151.19 00:16:30.704 clat percentiles (usec): 00:16:30.704 | 1.00th=[ 412], 5.00th=[ 416], 10.00th=[ 420], 20.00th=[ 424], 00:16:30.704 | 30.00th=[ 433], 40.00th=[ 453], 50.00th=[ 594], 60.00th=[ 635], 00:16:30.704 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 807], 00:16:30.704 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 1319], 99.95th=[ 1319], 00:16:30.704 | 99.99th=[ 1319] 00:16:30.704 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:30.704 slat (nsec): min=7925, max=72699, avg=19525.88, stdev=9612.05 00:16:30.704 clat (usec): min=256, max=854, avg=401.48, stdev=60.36 00:16:30.704 lat (usec): min=286, max=881, avg=421.00, stdev=62.73 00:16:30.704 clat percentiles (usec): 00:16:30.704 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 326], 20.00th=[ 347], 00:16:30.704 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 416], 00:16:30.704 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 474], 95.00th=[ 510], 00:16:30.704 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 611], 99.95th=[ 857], 00:16:30.704 | 99.99th=[ 857] 00:16:30.704 bw ( KiB/s): min= 4888, max= 4888, per=48.74%, avg=4888.00, stdev= 0.00, samples=1 00:16:30.704 iops : min= 1222, max= 1222, avg=1222.00, stdev= 0.00, samples=1 00:16:30.704 lat (usec) : 500=70.02%, 750=23.56%, 1000=6.37% 00:16:30.704 lat (msec) : 2=0.05% 00:16:30.704 cpu : usr=2.50%, sys=4.40%, ctx=1980, majf=0, minf=1 00:16:30.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.704 issued rwts: total=954,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.705 job1: (groupid=0, jobs=1): err= 0: pid=507034: Wed May 15 06:53:44 2024 00:16:30.705 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:16:30.705 slat (nsec): min=8323, max=36747, avg=17970.29, stdev=9047.71 00:16:30.705 clat (usec): min=405, max=42136, avg=39099.82, stdev=8869.66 00:16:30.705 lat (usec): min=416, max=42153, avg=39117.79, stdev=8871.34 00:16:30.705 clat percentiles (usec): 00:16:30.705 | 1.00th=[ 408], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:30.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:30.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:30.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:16:30.705 | 99.99th=[42206] 00:16:30.705 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:16:30.705 slat (nsec): min=9284, max=75098, avg=22192.04, stdev=12316.70 00:16:30.705 clat (usec): min=236, max=561, avg=361.69, stdev=83.80 00:16:30.705 lat (usec): min=247, max=610, avg=383.88, stdev=90.35 00:16:30.705 clat percentiles (usec): 00:16:30.705 | 1.00th=[ 243], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 281], 00:16:30.705 | 30.00th=[ 293], 40.00th=[ 314], 50.00th=[ 351], 60.00th=[ 392], 00:16:30.705 | 70.00th=[ 424], 80.00th=[ 449], 90.00th=[ 478], 95.00th=[ 498], 00:16:30.705 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 562], 99.95th=[ 562], 00:16:30.705 | 99.99th=[ 562] 00:16:30.705 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:16:30.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:30.705 lat (usec) : 250=5.25%, 500=86.68%, 750=4.32% 00:16:30.705 lat (msec) : 50=3.75% 00:16:30.705 cpu : usr=0.59%, sys=1.57%, ctx=535, majf=0, minf=2 00:16:30.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.705 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.705 job2: (groupid=0, jobs=1): err= 0: pid=507037: Wed May 15 06:53:44 2024 00:16:30.705 read: IOPS=19, BW=79.1KiB/s (80.9kB/s)(80.0KiB/1012msec) 00:16:30.705 slat (nsec): min=11419, max=35663, avg=18543.10, stdev=8844.77 00:16:30.705 clat (usec): min=41913, max=42089, avg=41984.56, stdev=43.72 00:16:30.705 lat (usec): min=41948, max=42100, avg=42003.10, stdev=40.18 00:16:30.705 clat percentiles (usec): 00:16:30.705 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:16:30.705 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:30.705 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:30.705 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:30.705 | 99.99th=[42206] 00:16:30.705 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:16:30.705 slat (nsec): min=9316, max=73371, avg=21077.13, stdev=10719.13 00:16:30.705 clat (usec): min=234, max=1649, avg=309.07, stdev=87.26 00:16:30.705 lat (usec): min=245, max=1668, avg=330.14, stdev=88.54 00:16:30.705 clat percentiles (usec): 00:16:30.705 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:16:30.705 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 306], 00:16:30.705 | 70.00th=[ 318], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 392], 00:16:30.705 | 99.00th=[ 441], 99.50th=[ 529], 99.90th=[ 1647], 99.95th=[ 1647], 00:16:30.705 | 99.99th=[ 1647] 00:16:30.705 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:16:30.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:30.705 lat (usec) : 250=4.51%, 500=91.17%, 750=0.19% 00:16:30.705 lat (msec) : 2=0.38%, 50=3.76% 00:16:30.705 cpu : usr=1.09%, sys=0.89%, ctx=533, majf=0, minf=1 00:16:30.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.705 issued rwts: 
total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.705 job3: (groupid=0, jobs=1): err= 0: pid=507042: Wed May 15 06:53:44 2024 00:16:30.705 read: IOPS=19, BW=78.9KiB/s (80.8kB/s)(80.0KiB/1014msec) 00:16:30.705 slat (nsec): min=12567, max=42941, avg=20128.45, stdev=9961.58 00:16:30.705 clat (usec): min=21292, max=41346, avg=40009.07, stdev=4406.36 00:16:30.705 lat (usec): min=21325, max=41361, avg=40029.20, stdev=4403.24 00:16:30.705 clat percentiles (usec): 00:16:30.705 | 1.00th=[21365], 5.00th=[21365], 10.00th=[41157], 20.00th=[41157], 00:16:30.705 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:30.705 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:30.705 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:30.705 | 99.99th=[41157] 00:16:30.705 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:16:30.705 slat (nsec): min=7093, max=69057, avg=22290.16, stdev=11382.45 00:16:30.705 clat (usec): min=236, max=1527, avg=387.17, stdev=101.94 00:16:30.705 lat (usec): min=246, max=1561, avg=409.46, stdev=105.85 00:16:30.705 clat percentiles (usec): 00:16:30.705 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 289], 20.00th=[ 306], 00:16:30.705 | 30.00th=[ 318], 40.00th=[ 351], 50.00th=[ 383], 60.00th=[ 412], 00:16:30.705 | 70.00th=[ 437], 80.00th=[ 453], 90.00th=[ 482], 95.00th=[ 502], 00:16:30.705 | 99.00th=[ 644], 99.50th=[ 701], 99.90th=[ 1532], 99.95th=[ 1532], 00:16:30.705 | 99.99th=[ 1532] 00:16:30.705 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:16:30.705 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:30.705 lat (usec) : 250=0.75%, 500=90.41%, 750=4.70% 00:16:30.705 lat (msec) : 2=0.38%, 50=3.76% 00:16:30.705 cpu : usr=0.69%, sys=1.09%, ctx=533, majf=0, minf=1 00:16:30.705 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:30.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.705 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:30.705 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:30.705 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:30.705 00:16:30.705 Run status group 0 (all jobs): 00:16:30.705 READ: bw=3976KiB/s (4072kB/s), 78.9KiB/s-3812KiB/s (80.8kB/s-3904kB/s), io=4060KiB (4157kB), run=1001-1021msec 00:16:30.705 WRITE: bw=9.79MiB/s (10.3MB/s), 2006KiB/s-4092KiB/s (2054kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1021msec 00:16:30.705 00:16:30.705 Disk stats (read/write): 00:16:30.705 nvme0n1: ios=782/1024, merge=0/0, ticks=1373/375, in_queue=1748, util=98.00% 00:16:30.705 nvme0n2: ios=56/512, merge=0/0, ticks=800/177, in_queue=977, util=98.68% 00:16:30.705 nvme0n3: ios=68/512, merge=0/0, ticks=891/142, in_queue=1033, util=98.75% 00:16:30.705 nvme0n4: ios=40/512, merge=0/0, ticks=1602/189, in_queue=1791, util=98.84% 00:16:30.705 06:53:44 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:30.705 [global] 00:16:30.705 thread=1 00:16:30.705 invalidate=1 00:16:30.705 rw=write 00:16:30.705 time_based=1 00:16:30.705 runtime=1 00:16:30.705 ioengine=libaio 00:16:30.705 direct=1 00:16:30.705 bs=4096 00:16:30.705 iodepth=128 00:16:30.705 norandommap=0 00:16:30.705 numjobs=1 00:16:30.705 00:16:30.705 verify_dump=1 00:16:30.705 
verify_backlog=512 00:16:30.705 verify_state_save=0 00:16:30.705 do_verify=1 00:16:30.705 verify=crc32c-intel 00:16:30.705 [job0] 00:16:30.705 filename=/dev/nvme0n1 00:16:30.705 [job1] 00:16:30.705 filename=/dev/nvme0n2 00:16:30.705 [job2] 00:16:30.705 filename=/dev/nvme0n3 00:16:30.705 [job3] 00:16:30.705 filename=/dev/nvme0n4 00:16:30.705 Could not set queue depth (nvme0n1) 00:16:30.705 Could not set queue depth (nvme0n2) 00:16:30.705 Could not set queue depth (nvme0n3) 00:16:30.705 Could not set queue depth (nvme0n4) 00:16:30.962 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.963 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.963 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.963 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:30.963 fio-3.35 00:16:30.963 Starting 4 threads 00:16:32.334 00:16:32.334 job0: (groupid=0, jobs=1): err= 0: pid=507270: Wed May 15 06:53:46 2024 00:16:32.334 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:16:32.334 slat (usec): min=3, max=11599, avg=108.44, stdev=662.34 00:16:32.334 clat (usec): min=6331, max=33998, avg=13561.92, stdev=4538.57 00:16:32.334 lat (usec): min=6892, max=34005, avg=13670.36, stdev=4570.22 00:16:32.334 clat percentiles (usec): 00:16:32.334 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9765], 00:16:32.334 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12387], 60.00th=[13435], 00:16:32.334 | 70.00th=[14484], 80.00th=[15926], 90.00th=[20579], 95.00th=[22938], 00:16:32.334 | 99.00th=[29230], 99.50th=[29492], 99.90th=[33817], 99.95th=[33817], 00:16:32.334 | 99.99th=[33817] 00:16:32.334 write: IOPS=4111, BW=16.1MiB/s (16.8MB/s)(16.2MiB/1006msec); 0 zone resets 00:16:32.334 slat (usec): min=4, max=10036, avg=125.05, stdev=621.72 00:16:32.334 clat (usec): min=1547, max=44355, avg=17425.60, stdev=7128.82 00:16:32.334 lat (usec): min=1570, max=44362, avg=17550.65, stdev=7169.46 00:16:32.334 clat percentiles (usec): 00:16:32.334 | 1.00th=[ 5211], 5.00th=[ 7111], 10.00th=[ 8848], 20.00th=[11994], 00:16:32.334 | 30.00th=[14091], 40.00th=[15664], 50.00th=[16909], 60.00th=[18220], 00:16:32.334 | 70.00th=[19006], 80.00th=[21365], 90.00th=[25560], 95.00th=[32375], 00:16:32.334 | 99.00th=[40633], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:16:32.334 | 99.99th=[44303] 00:16:32.334 bw ( KiB/s): min=16208, max=16560, per=33.40%, avg=16384.00, stdev=248.90, samples=2 00:16:32.334 iops : min= 4052, max= 4140, avg=4096.00, stdev=62.23, samples=2 00:16:32.334 lat (msec) : 2=0.10%, 4=0.19%, 10=18.23%, 20=63.00%, 50=18.48% 00:16:32.334 cpu : usr=5.37%, sys=7.16%, ctx=403, majf=0, minf=15 00:16:32.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:32.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.334 issued rwts: total=4096,4136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.334 job1: (groupid=0, jobs=1): err= 0: pid=507271: Wed May 15 06:53:46 2024 00:16:32.334 read: IOPS=2523, BW=9.86MiB/s (10.3MB/s)(10.3MiB/1043msec) 00:16:32.334 slat (usec): min=2, max=21309, avg=157.99, stdev=1040.50 00:16:32.334 clat (usec): min=5605, max=75342, 
avg=21020.21, stdev=12722.08 00:16:32.334 lat (usec): min=5638, max=75371, avg=21178.20, stdev=12814.96 00:16:32.334 clat percentiles (usec): 00:16:32.334 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[10421], 00:16:32.334 | 30.00th=[11731], 40.00th=[14091], 50.00th=[18744], 60.00th=[22414], 00:16:32.334 | 70.00th=[25560], 80.00th=[28443], 90.00th=[34866], 95.00th=[48497], 00:16:32.334 | 99.00th=[69731], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:16:32.334 | 99.99th=[74974] 00:16:32.334 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:16:32.334 slat (usec): min=3, max=17489, avg=161.23, stdev=857.02 00:16:32.334 clat (usec): min=1493, max=106199, avg=25072.05, stdev=18835.10 00:16:32.334 lat (usec): min=1504, max=106212, avg=25233.28, stdev=18963.60 00:16:32.334 clat percentiles (msec): 00:16:32.334 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:16:32.334 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 19], 60.00th=[ 26], 00:16:32.334 | 70.00th=[ 31], 80.00th=[ 39], 90.00th=[ 57], 95.00th=[ 62], 00:16:32.334 | 99.00th=[ 88], 99.50th=[ 96], 99.90th=[ 101], 99.95th=[ 107], 00:16:32.334 | 99.99th=[ 107] 00:16:32.334 bw ( KiB/s): min=10008, max=14120, per=24.59%, avg=12064.00, stdev=2907.62, samples=2 00:16:32.334 iops : min= 2502, max= 3530, avg=3016.00, stdev=726.91, samples=2 00:16:32.334 lat (msec) : 2=0.05%, 4=0.54%, 10=19.11%, 20=33.56%, 50=37.43% 00:16:32.334 lat (msec) : 100=9.20%, 250=0.11% 00:16:32.334 cpu : usr=2.30%, sys=3.65%, ctx=392, majf=0, minf=11 00:16:32.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:32.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.334 issued rwts: total=2632,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.334 job2: (groupid=0, jobs=1): err= 0: pid=507272: Wed May 15 06:53:46 2024 00:16:32.334 read: IOPS=1956, BW=7824KiB/s (8012kB/s)(8192KiB/1047msec) 00:16:32.334 slat (usec): min=3, max=44532, avg=175.70, stdev=1220.38 00:16:32.334 clat (usec): min=4931, max=58973, avg=20738.89, stdev=11642.49 00:16:32.334 lat (usec): min=4938, max=58981, avg=20914.58, stdev=11702.71 00:16:32.334 clat percentiles (usec): 00:16:32.334 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[12256], 20.00th=[13304], 00:16:32.334 | 30.00th=[14091], 40.00th=[15139], 50.00th=[16909], 60.00th=[18482], 00:16:32.334 | 70.00th=[20317], 80.00th=[24249], 90.00th=[38011], 95.00th=[52167], 00:16:32.334 | 99.00th=[57934], 99.50th=[57934], 99.90th=[58459], 99.95th=[58983], 00:16:32.334 | 99.99th=[58983] 00:16:32.334 write: IOPS=2336, BW=9345KiB/s (9569kB/s)(9784KiB/1047msec); 0 zone resets 00:16:32.334 slat (usec): min=4, max=42523, avg=253.07, stdev=1279.39 00:16:32.334 clat (usec): min=1923, max=108721, avg=33411.92, stdev=19263.39 00:16:32.334 lat (usec): min=1944, max=108742, avg=33664.99, stdev=19387.49 00:16:32.334 clat percentiles (msec): 00:16:32.334 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:16:32.334 | 30.00th=[ 17], 40.00th=[ 29], 50.00th=[ 34], 60.00th=[ 39], 00:16:32.334 | 70.00th=[ 42], 80.00th=[ 51], 90.00th=[ 57], 95.00th=[ 62], 00:16:32.334 | 99.00th=[ 96], 99.50th=[ 102], 99.90th=[ 107], 99.95th=[ 108], 00:16:32.334 | 99.99th=[ 109] 00:16:32.334 bw ( KiB/s): min= 8256, max=10308, per=18.92%, avg=9282.00, stdev=1450.98, samples=2 00:16:32.334 iops : min= 2064, max= 2577, 
avg=2320.50, stdev=362.75, samples=2 00:16:32.334 lat (msec) : 2=0.09%, 4=0.45%, 10=4.74%, 20=42.57%, 50=38.16% 00:16:32.334 lat (msec) : 100=13.60%, 250=0.40% 00:16:32.334 cpu : usr=2.87%, sys=4.59%, ctx=474, majf=0, minf=13 00:16:32.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:32.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.334 issued rwts: total=2048,2446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.334 job3: (groupid=0, jobs=1): err= 0: pid=507273: Wed May 15 06:53:46 2024 00:16:32.334 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:16:32.334 slat (usec): min=3, max=22836, avg=180.11, stdev=1110.68 00:16:32.334 clat (usec): min=9205, max=52538, avg=23335.83, stdev=8529.45 00:16:32.334 lat (usec): min=9213, max=52583, avg=23515.95, stdev=8605.77 00:16:32.334 clat percentiles (usec): 00:16:32.334 | 1.00th=[11994], 5.00th=[13304], 10.00th=[13829], 20.00th=[14877], 00:16:32.334 | 30.00th=[16712], 40.00th=[19006], 50.00th=[21627], 60.00th=[25035], 00:16:32.334 | 70.00th=[27395], 80.00th=[31065], 90.00th=[35390], 95.00th=[40109], 00:16:32.334 | 99.00th=[44303], 99.50th=[44827], 99.90th=[47449], 99.95th=[52167], 00:16:32.334 | 99.99th=[52691] 00:16:32.334 write: IOPS=3167, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1006msec); 0 zone resets 00:16:32.334 slat (usec): min=4, max=11707, avg=130.58, stdev=795.52 00:16:32.334 clat (usec): min=5280, max=38811, avg=17155.45, stdev=5064.04 00:16:32.334 lat (usec): min=5307, max=38821, avg=17286.03, stdev=5131.54 00:16:32.334 clat percentiles (usec): 00:16:32.334 | 1.00th=[ 7570], 5.00th=[11863], 10.00th=[12125], 20.00th=[12649], 00:16:32.334 | 30.00th=[13304], 40.00th=[14353], 50.00th=[16057], 60.00th=[17433], 00:16:32.334 | 70.00th=[19792], 80.00th=[21627], 90.00th=[23987], 95.00th=[25560], 00:16:32.334 | 99.00th=[30802], 99.50th=[34341], 99.90th=[36963], 99.95th=[38536], 00:16:32.334 | 99.99th=[39060] 00:16:32.334 bw ( KiB/s): min=12288, max=12288, per=25.05%, avg=12288.00, stdev= 0.00, samples=2 00:16:32.334 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:32.334 lat (msec) : 10=0.99%, 20=56.33%, 50=42.64%, 100=0.03% 00:16:32.334 cpu : usr=3.68%, sys=5.87%, ctx=214, majf=0, minf=11 00:16:32.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:32.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.334 issued rwts: total=3072,3187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.334 00:16:32.334 Run status group 0 (all jobs): 00:16:32.334 READ: bw=44.2MiB/s (46.3MB/s), 7824KiB/s-15.9MiB/s (8012kB/s-16.7MB/s), io=46.3MiB (48.5MB), run=1006-1047msec 00:16:32.334 WRITE: bw=47.9MiB/s (50.2MB/s), 9345KiB/s-16.1MiB/s (9569kB/s-16.8MB/s), io=50.2MiB (52.6MB), run=1006-1047msec 00:16:32.334 00:16:32.334 Disk stats (read/write): 00:16:32.334 nvme0n1: ios=3248/3584, merge=0/0, ticks=44046/61324, in_queue=105370, util=87.98% 00:16:32.334 nvme0n2: ios=2610/2636, merge=0/0, ticks=50489/54040, in_queue=104529, util=92.17% 00:16:32.334 nvme0n3: ios=1771/2048, merge=0/0, ticks=36948/58920, in_queue=95868, util=99.79% 00:16:32.334 nvme0n4: ios=2545/2565, merge=0/0, ticks=21701/13608, 
in_queue=35309, util=96.85% 00:16:32.334 06:53:46 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:32.334 [global] 00:16:32.334 thread=1 00:16:32.334 invalidate=1 00:16:32.334 rw=randwrite 00:16:32.334 time_based=1 00:16:32.334 runtime=1 00:16:32.334 ioengine=libaio 00:16:32.334 direct=1 00:16:32.334 bs=4096 00:16:32.334 iodepth=128 00:16:32.334 norandommap=0 00:16:32.334 numjobs=1 00:16:32.334 00:16:32.334 verify_dump=1 00:16:32.334 verify_backlog=512 00:16:32.334 verify_state_save=0 00:16:32.334 do_verify=1 00:16:32.334 verify=crc32c-intel 00:16:32.334 [job0] 00:16:32.334 filename=/dev/nvme0n1 00:16:32.334 [job1] 00:16:32.334 filename=/dev/nvme0n2 00:16:32.334 [job2] 00:16:32.334 filename=/dev/nvme0n3 00:16:32.334 [job3] 00:16:32.334 filename=/dev/nvme0n4 00:16:32.334 Could not set queue depth (nvme0n1) 00:16:32.334 Could not set queue depth (nvme0n2) 00:16:32.335 Could not set queue depth (nvme0n3) 00:16:32.335 Could not set queue depth (nvme0n4) 00:16:32.335 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.335 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.335 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.335 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.335 fio-3.35 00:16:32.335 Starting 4 threads 00:16:33.707 00:16:33.707 job0: (groupid=0, jobs=1): err= 0: pid=507522: Wed May 15 06:53:47 2024 00:16:33.707 read: IOPS=1604, BW=6419KiB/s (6573kB/s)(6528KiB/1017msec) 00:16:33.707 slat (usec): min=3, max=25753, avg=228.26, stdev=1294.48 00:16:33.707 clat (usec): min=8769, max=61207, avg=25729.76, stdev=9405.76 00:16:33.707 lat (usec): min=9012, max=61221, avg=25958.02, stdev=9487.12 00:16:33.707 clat percentiles (usec): 00:16:33.707 | 1.00th=[10421], 5.00th=[13829], 10.00th=[14877], 20.00th=[16712], 00:16:33.707 | 30.00th=[17957], 40.00th=[21365], 50.00th=[25297], 60.00th=[27395], 00:16:33.707 | 70.00th=[30540], 80.00th=[35390], 90.00th=[39584], 95.00th=[42206], 00:16:33.707 | 99.00th=[45876], 99.50th=[51119], 99.90th=[52691], 99.95th=[61080], 00:16:33.707 | 99.99th=[61080] 00:16:33.707 write: IOPS=2013, BW=8055KiB/s (8248kB/s)(8192KiB/1017msec); 0 zone resets 00:16:33.707 slat (usec): min=4, max=25494, avg=296.18, stdev=1175.58 00:16:33.707 clat (msec): min=2, max=111, avg=42.49, stdev=17.67 00:16:33.707 lat (msec): min=2, max=111, avg=42.78, stdev=17.78 00:16:33.707 clat percentiles (msec): 00:16:33.707 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 16], 20.00th=[ 32], 00:16:33.707 | 30.00th=[ 39], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 45], 00:16:33.708 | 70.00th=[ 50], 80.00th=[ 55], 90.00th=[ 61], 95.00th=[ 77], 00:16:33.708 | 99.00th=[ 99], 99.50th=[ 105], 99.90th=[ 107], 99.95th=[ 107], 00:16:33.708 | 99.99th=[ 112] 00:16:33.708 bw ( KiB/s): min= 7936, max= 8192, per=16.25%, avg=8064.00, stdev=181.02, samples=2 00:16:33.708 iops : min= 1984, max= 2048, avg=2016.00, stdev=45.25, samples=2 00:16:33.708 lat (msec) : 4=0.49%, 10=0.73%, 20=22.99%, 50=59.02%, 100=16.22% 00:16:33.708 lat (msec) : 250=0.54% 00:16:33.708 cpu : usr=1.97%, sys=3.25%, ctx=355, majf=0, minf=1 00:16:33.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:33.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:33.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.708 issued rwts: total=1632,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.708 job1: (groupid=0, jobs=1): err= 0: pid=507535: Wed May 15 06:53:47 2024 00:16:33.708 read: IOPS=3906, BW=15.3MiB/s (16.0MB/s)(15.5MiB/1016msec) 00:16:33.708 slat (usec): min=2, max=17890, avg=126.74, stdev=751.76 00:16:33.708 clat (usec): min=6993, max=39619, avg=16621.90, stdev=6082.75 00:16:33.708 lat (usec): min=7123, max=39624, avg=16748.64, stdev=6085.55 00:16:33.708 clat percentiles (usec): 00:16:33.708 | 1.00th=[ 7635], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[10552], 00:16:33.708 | 30.00th=[11731], 40.00th=[15533], 50.00th=[16450], 60.00th=[17433], 00:16:33.708 | 70.00th=[19268], 80.00th=[21365], 90.00th=[24773], 95.00th=[27132], 00:16:33.708 | 99.00th=[34341], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:16:33.708 | 99.99th=[39584] 00:16:33.708 write: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1016msec); 0 zone resets 00:16:33.708 slat (usec): min=3, max=6291, avg=116.22, stdev=556.47 00:16:33.708 clat (usec): min=8093, max=23911, avg=15152.35, stdev=2969.92 00:16:33.708 lat (usec): min=8111, max=23915, avg=15268.57, stdev=2946.89 00:16:33.708 clat percentiles (usec): 00:16:33.708 | 1.00th=[ 9372], 5.00th=[10814], 10.00th=[11600], 20.00th=[12911], 00:16:33.708 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14353], 60.00th=[15139], 00:16:33.708 | 70.00th=[16712], 80.00th=[17957], 90.00th=[19530], 95.00th=[20055], 00:16:33.708 | 99.00th=[22152], 99.50th=[22414], 99.90th=[23462], 99.95th=[23462], 00:16:33.708 | 99.99th=[23987] 00:16:33.708 bw ( KiB/s): min=15895, max=16904, per=33.04%, avg=16399.50, stdev=713.47, samples=2 00:16:33.708 iops : min= 3973, max= 4226, avg=4099.50, stdev=178.90, samples=2 00:16:33.708 lat (msec) : 10=8.52%, 20=75.18%, 50=16.31% 00:16:33.708 cpu : usr=3.15%, sys=5.32%, ctx=432, majf=0, minf=1 00:16:33.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:33.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.708 issued rwts: total=3969,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.708 job2: (groupid=0, jobs=1): err= 0: pid=507574: Wed May 15 06:53:47 2024 00:16:33.708 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:16:33.708 slat (usec): min=3, max=7724, avg=129.54, stdev=661.30 00:16:33.708 clat (usec): min=9962, max=47892, avg=16855.62, stdev=4560.64 00:16:33.708 lat (usec): min=9973, max=48848, avg=16985.17, stdev=4603.94 00:16:33.708 clat percentiles (usec): 00:16:33.708 | 1.00th=[11338], 5.00th=[12780], 10.00th=[13435], 20.00th=[13566], 00:16:33.708 | 30.00th=[13829], 40.00th=[14746], 50.00th=[15533], 60.00th=[16712], 00:16:33.708 | 70.00th=[18220], 80.00th=[19530], 90.00th=[21627], 95.00th=[22938], 00:16:33.708 | 99.00th=[39060], 99.50th=[44303], 99.90th=[47973], 99.95th=[47973], 00:16:33.708 | 99.99th=[47973] 00:16:33.708 write: IOPS=3483, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1017msec); 0 zone resets 00:16:33.708 slat (usec): min=4, max=30459, avg=163.25, stdev=842.90 00:16:33.708 clat (msec): min=7, max=102, avg=21.58, stdev=18.78 00:16:33.708 lat (msec): min=7, max=102, avg=21.74, stdev=18.92 00:16:33.708 clat percentiles (msec): 00:16:33.708 | 
1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:16:33.708 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:16:33.708 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 52], 95.00th=[ 66], 00:16:33.708 | 99.00th=[ 99], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 102], 00:16:33.708 | 99.99th=[ 103] 00:16:33.708 bw ( KiB/s): min=10936, max=16384, per=27.52%, avg=13660.00, stdev=3852.32, samples=2 00:16:33.708 iops : min= 2734, max= 4096, avg=3415.00, stdev=963.08, samples=2 00:16:33.708 lat (msec) : 10=0.11%, 20=81.18%, 50=13.20%, 100=5.43%, 250=0.09% 00:16:33.708 cpu : usr=3.44%, sys=5.22%, ctx=457, majf=0, minf=1 00:16:33.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:33.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.708 issued rwts: total=3072,3543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.708 job3: (groupid=0, jobs=1): err= 0: pid=507580: Wed May 15 06:53:47 2024 00:16:33.708 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:16:33.708 slat (usec): min=2, max=11632, avg=180.70, stdev=862.77 00:16:33.708 clat (usec): min=8368, max=60510, avg=20342.84, stdev=13364.73 00:16:33.708 lat (usec): min=8376, max=60525, avg=20523.54, stdev=13487.69 00:16:33.708 clat percentiles (usec): 00:16:33.708 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10814], 20.00th=[11600], 00:16:33.708 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12780], 60.00th=[14222], 00:16:33.708 | 70.00th=[19530], 80.00th=[34866], 90.00th=[46924], 95.00th=[49546], 00:16:33.708 | 99.00th=[56361], 99.50th=[57934], 99.90th=[57934], 99.95th=[58983], 00:16:33.708 | 99.99th=[60556] 00:16:33.708 write: IOPS=2923, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1003msec); 0 zone resets 00:16:33.708 slat (usec): min=3, max=22249, avg=175.61, stdev=781.04 00:16:33.708 clat (usec): min=777, max=71527, avg=25468.77, stdev=15945.66 00:16:33.708 lat (usec): min=3094, max=71541, avg=25644.37, stdev=16041.67 00:16:33.708 clat percentiles (usec): 00:16:33.708 | 1.00th=[ 3425], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11994], 00:16:33.708 | 30.00th=[13566], 40.00th=[14746], 50.00th=[19006], 60.00th=[27395], 00:16:33.708 | 70.00th=[34341], 80.00th=[38011], 90.00th=[51643], 95.00th=[59507], 00:16:33.708 | 99.00th=[62653], 99.50th=[63701], 99.90th=[69731], 99.95th=[69731], 00:16:33.708 | 99.99th=[71828] 00:16:33.708 bw ( KiB/s): min= 8248, max=14184, per=22.60%, avg=11216.00, stdev=4197.39, samples=2 00:16:33.708 iops : min= 2062, max= 3546, avg=2804.00, stdev=1049.35, samples=2 00:16:33.708 lat (usec) : 1000=0.02% 00:16:33.708 lat (msec) : 4=0.58%, 10=5.61%, 20=54.24%, 50=32.21%, 100=7.34% 00:16:33.708 cpu : usr=3.39%, sys=4.59%, ctx=479, majf=0, minf=1 00:16:33.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:33.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:33.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:33.708 issued rwts: total=2560,2932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:33.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:33.708 00:16:33.708 Run status group 0 (all jobs): 00:16:33.708 READ: bw=43.1MiB/s (45.2MB/s), 6419KiB/s-15.3MiB/s (6573kB/s-16.0MB/s), io=43.9MiB (46.0MB), run=1003-1017msec 00:16:33.708 WRITE: bw=48.5MiB/s (50.8MB/s), 8055KiB/s-15.7MiB/s 
(8248kB/s-16.5MB/s), io=49.3MiB (51.7MB), run=1003-1017msec 00:16:33.708 00:16:33.708 Disk stats (read/write): 00:16:33.708 nvme0n1: ios=1577/1705, merge=0/0, ticks=38605/62565, in_queue=101170, util=91.18% 00:16:33.708 nvme0n2: ios=3115/3332, merge=0/0, ticks=14458/12327, in_queue=26785, util=91.35% 00:16:33.708 nvme0n3: ios=3128/3247, merge=0/0, ticks=16745/17485, in_queue=34230, util=91.40% 00:16:33.708 nvme0n4: ios=2077/2051, merge=0/0, ticks=20155/25479, in_queue=45634, util=99.26% 00:16:33.708 06:53:47 -- target/fio.sh@55 -- # sync 00:16:33.708 06:53:47 -- target/fio.sh@59 -- # fio_pid=507774 00:16:33.708 06:53:47 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:33.708 06:53:47 -- target/fio.sh@61 -- # sleep 3 00:16:33.708 [global] 00:16:33.708 thread=1 00:16:33.708 invalidate=1 00:16:33.708 rw=read 00:16:33.708 time_based=1 00:16:33.708 runtime=10 00:16:33.708 ioengine=libaio 00:16:33.708 direct=1 00:16:33.708 bs=4096 00:16:33.708 iodepth=1 00:16:33.708 norandommap=1 00:16:33.708 numjobs=1 00:16:33.708 00:16:33.708 [job0] 00:16:33.708 filename=/dev/nvme0n1 00:16:33.708 [job1] 00:16:33.708 filename=/dev/nvme0n2 00:16:33.708 [job2] 00:16:33.708 filename=/dev/nvme0n3 00:16:33.708 [job3] 00:16:33.708 filename=/dev/nvme0n4 00:16:33.708 Could not set queue depth (nvme0n1) 00:16:33.708 Could not set queue depth (nvme0n2) 00:16:33.708 Could not set queue depth (nvme0n3) 00:16:33.708 Could not set queue depth (nvme0n4) 00:16:33.999 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.999 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.999 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.999 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:34.000 fio-3.35 00:16:34.000 Starting 4 threads 00:16:36.522 06:53:50 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:37.086 06:53:51 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:37.086 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1249280, buflen=4096 00:16:37.086 fio: pid=507870, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:37.086 06:53:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:37.086 06:53:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:37.343 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8818688, buflen=4096 00:16:37.343 fio: pid=507869, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:37.343 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=4448256, buflen=4096 00:16:37.343 fio: pid=507867, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:37.343 06:53:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:37.343 06:53:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:37.601 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=4612096, buflen=4096 00:16:37.601 fio: 
pid=507868, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:16:37.601 06:53:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:37.601 06:53:51 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:37.859 00:16:37.859 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=507867: Wed May 15 06:53:51 2024 00:16:37.859 read: IOPS=316, BW=1264KiB/s (1294kB/s)(4344KiB/3437msec) 00:16:37.859 slat (usec): min=5, max=8746, avg=22.40, stdev=265.02 00:16:37.859 clat (usec): min=364, max=42217, avg=3137.67, stdev=10027.26 00:16:37.859 lat (usec): min=371, max=50964, avg=3160.08, stdev=10064.48 00:16:37.859 clat percentiles (usec): 00:16:37.859 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 441], 00:16:37.859 | 30.00th=[ 453], 40.00th=[ 474], 50.00th=[ 490], 60.00th=[ 510], 00:16:37.859 | 70.00th=[ 529], 80.00th=[ 578], 90.00th=[ 685], 95.00th=[41157], 00:16:37.859 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:37.859 | 99.99th=[42206] 00:16:37.859 bw ( KiB/s): min= 96, max= 2840, per=28.41%, avg=1433.33, stdev=1173.85, samples=6 00:16:37.859 iops : min= 24, max= 710, avg=358.33, stdev=293.46, samples=6 00:16:37.859 lat (usec) : 500=55.38%, 750=36.61%, 1000=1.20% 00:16:37.859 lat (msec) : 2=0.09%, 4=0.09%, 10=0.09%, 50=6.44% 00:16:37.859 cpu : usr=0.38%, sys=0.61%, ctx=1090, majf=0, minf=1 00:16:37.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.859 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.859 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=507868: Wed May 15 06:53:51 2024 00:16:37.859 read: IOPS=304, BW=1216KiB/s (1245kB/s)(4504KiB/3704msec) 00:16:37.859 slat (usec): min=5, max=8887, avg=30.78, stdev=425.43 00:16:37.859 clat (usec): min=343, max=43989, avg=3256.12, stdev=10325.08 00:16:37.859 lat (usec): min=348, max=50998, avg=3286.92, stdev=10419.33 00:16:37.859 clat percentiles (usec): 00:16:37.859 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 379], 00:16:37.859 | 30.00th=[ 392], 40.00th=[ 412], 50.00th=[ 474], 60.00th=[ 498], 00:16:37.859 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 603], 95.00th=[41157], 00:16:37.859 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[43779], 00:16:37.859 | 99.99th=[43779] 00:16:37.859 bw ( KiB/s): min= 96, max= 4784, per=25.38%, avg=1280.57, stdev=2036.58, samples=7 00:16:37.859 iops : min= 24, max= 1196, avg=320.14, stdev=509.15, samples=7 00:16:37.859 lat (usec) : 500=62.73%, 750=29.10%, 1000=0.89% 00:16:37.859 lat (msec) : 2=0.27%, 10=0.09%, 50=6.83% 00:16:37.859 cpu : usr=0.19%, sys=0.43%, ctx=1129, majf=0, minf=1 00:16:37.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 issued rwts: total=1127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.859 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.859 job2: (groupid=0, jobs=1): err=121 
(file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=507869: Wed May 15 06:53:51 2024 00:16:37.859 read: IOPS=672, BW=2687KiB/s (2752kB/s)(8612KiB/3205msec) 00:16:37.859 slat (nsec): min=5660, max=78873, avg=19966.24, stdev=10093.50 00:16:37.859 clat (usec): min=353, max=42972, avg=1463.26, stdev=6292.50 00:16:37.859 lat (usec): min=359, max=42993, avg=1483.23, stdev=6293.26 00:16:37.859 clat percentiles (usec): 00:16:37.859 | 1.00th=[ 363], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 416], 00:16:37.859 | 30.00th=[ 437], 40.00th=[ 449], 50.00th=[ 461], 60.00th=[ 474], 00:16:37.859 | 70.00th=[ 498], 80.00th=[ 529], 90.00th=[ 603], 95.00th=[ 660], 00:16:37.859 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:37.859 | 99.99th=[42730] 00:16:37.859 bw ( KiB/s): min= 96, max= 5904, per=56.79%, avg=2864.00, stdev=3038.93, samples=6 00:16:37.859 iops : min= 24, max= 1476, avg=716.00, stdev=759.73, samples=6 00:16:37.859 lat (usec) : 500=70.61%, 750=26.56%, 1000=0.32% 00:16:37.859 lat (msec) : 2=0.05%, 50=2.41% 00:16:37.859 cpu : usr=0.94%, sys=1.50%, ctx=2156, majf=0, minf=1 00:16:37.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.859 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.859 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=507870: Wed May 15 06:53:51 2024 00:16:37.859 read: IOPS=104, BW=417KiB/s (427kB/s)(1220KiB/2925msec) 00:16:37.859 slat (nsec): min=6771, max=38852, avg=11011.12, stdev=7987.88 00:16:37.859 clat (usec): min=411, max=42294, avg=9575.45, stdev=16979.38 00:16:37.859 lat (usec): min=418, max=42308, avg=9586.44, stdev=16985.78 00:16:37.859 clat percentiles (usec): 00:16:37.859 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 441], 20.00th=[ 478], 00:16:37.859 | 30.00th=[ 498], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 502], 00:16:37.859 | 70.00th=[ 529], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:37.859 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:37.859 | 99.99th=[42206] 00:16:37.859 bw ( KiB/s): min= 96, max= 1960, per=9.32%, avg=470.40, stdev=832.72, samples=5 00:16:37.859 iops : min= 24, max= 490, avg=117.60, stdev=208.18, samples=5 00:16:37.859 lat (usec) : 500=51.96%, 750=24.51%, 1000=0.65% 00:16:37.859 lat (msec) : 2=0.33%, 50=22.22% 00:16:37.859 cpu : usr=0.00%, sys=0.24%, ctx=306, majf=0, minf=1 00:16:37.859 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.859 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.859 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.859 00:16:37.859 Run status group 0 (all jobs): 00:16:37.859 READ: bw=5043KiB/s (5164kB/s), 417KiB/s-2687KiB/s (427kB/s-2752kB/s), io=18.2MiB (19.1MB), run=2925-3704msec 00:16:37.859 00:16:37.859 Disk stats (read/write): 00:16:37.859 nvme0n1: ios=1083/0, merge=0/0, ticks=3267/0, in_queue=3267, util=95.74% 00:16:37.859 nvme0n2: ios=1123/0, merge=0/0, ticks=3533/0, in_queue=3533, util=96.20% 00:16:37.859 nvme0n3: ios=2200/0, merge=0/0, ticks=3843/0, 
in_queue=3843, util=99.69% 00:16:37.859 nvme0n4: ios=303/0, merge=0/0, ticks=2841/0, in_queue=2841, util=96.75% 00:16:37.859 06:53:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:37.859 06:53:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:38.118 06:53:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.118 06:53:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:38.375 06:53:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.375 06:53:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:38.633 06:53:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:38.633 06:53:52 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:38.890 06:53:53 -- target/fio.sh@69 -- # fio_status=0 00:16:38.890 06:53:53 -- target/fio.sh@70 -- # wait 507774 00:16:38.890 06:53:53 -- target/fio.sh@70 -- # fio_status=4 00:16:38.890 06:53:53 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.148 06:53:53 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:39.148 06:53:53 -- common/autotest_common.sh@1198 -- # local i=0 00:16:39.148 06:53:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:39.148 06:53:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.148 06:53:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:39.148 06:53:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.148 06:53:53 -- common/autotest_common.sh@1210 -- # return 0 00:16:39.148 06:53:53 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:39.148 06:53:53 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:39.148 nvmf hotplug test: fio failed as expected 00:16:39.148 06:53:53 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.405 06:53:53 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:39.405 06:53:53 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:39.405 06:53:53 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:39.405 06:53:53 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:39.405 06:53:53 -- target/fio.sh@91 -- # nvmftestfini 00:16:39.405 06:53:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:39.405 06:53:53 -- nvmf/common.sh@116 -- # sync 00:16:39.405 06:53:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:39.405 06:53:53 -- nvmf/common.sh@119 -- # set +e 00:16:39.405 06:53:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:39.405 06:53:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:39.405 rmmod nvme_tcp 00:16:39.405 rmmod nvme_fabrics 00:16:39.405 rmmod nvme_keyring 00:16:39.405 06:53:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:39.405 06:53:53 -- nvmf/common.sh@123 -- # set -e 00:16:39.405 06:53:53 -- nvmf/common.sh@124 -- # return 0 00:16:39.405 06:53:53 -- nvmf/common.sh@477 -- # '[' -n 
505692 ']' 00:16:39.405 06:53:53 -- nvmf/common.sh@478 -- # killprocess 505692 00:16:39.405 06:53:53 -- common/autotest_common.sh@926 -- # '[' -z 505692 ']' 00:16:39.405 06:53:53 -- common/autotest_common.sh@930 -- # kill -0 505692 00:16:39.405 06:53:53 -- common/autotest_common.sh@931 -- # uname 00:16:39.405 06:53:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:39.405 06:53:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 505692 00:16:39.405 06:53:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:39.405 06:53:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:39.405 06:53:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 505692' 00:16:39.405 killing process with pid 505692 00:16:39.405 06:53:53 -- common/autotest_common.sh@945 -- # kill 505692 00:16:39.405 06:53:53 -- common/autotest_common.sh@950 -- # wait 505692 00:16:39.664 06:53:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:39.664 06:53:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:39.664 06:53:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:39.664 06:53:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.664 06:53:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:39.664 06:53:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.664 06:53:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.664 06:53:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.199 06:53:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:42.199 00:16:42.199 real 0m24.289s 00:16:42.199 user 1m22.399s 00:16:42.199 sys 0m6.530s 00:16:42.199 06:53:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.199 06:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:42.199 ************************************ 00:16:42.199 END TEST nvmf_fio_target 00:16:42.199 ************************************ 00:16:42.199 06:53:55 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:42.199 06:53:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:42.199 06:53:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:42.199 06:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:42.199 ************************************ 00:16:42.199 START TEST nvmf_bdevio 00:16:42.199 ************************************ 00:16:42.199 06:53:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:42.199 * Looking for test storage... 
00:16:42.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:42.199 06:53:55 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:42.199 06:53:55 -- nvmf/common.sh@7 -- # uname -s 00:16:42.199 06:53:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.199 06:53:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.199 06:53:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.199 06:53:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.199 06:53:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.199 06:53:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.199 06:53:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.199 06:53:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.199 06:53:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.199 06:53:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.199 06:53:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.199 06:53:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.199 06:53:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.199 06:53:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.199 06:53:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:42.199 06:53:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:42.199 06:53:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.199 06:53:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.199 06:53:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.199 06:53:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.199 06:53:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.199 06:53:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.199 06:53:55 -- paths/export.sh@5 -- # export PATH 00:16:42.199 06:53:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.199 06:53:55 -- nvmf/common.sh@46 -- # : 0 00:16:42.199 06:53:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:42.199 06:53:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:42.199 06:53:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:42.199 06:53:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.199 06:53:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.199 06:53:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:42.199 06:53:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:42.199 06:53:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:42.199 06:53:55 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.199 06:53:55 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.199 06:53:55 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:42.199 06:53:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:42.199 06:53:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.199 06:53:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:42.199 06:53:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:42.199 06:53:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:42.199 06:53:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.199 06:53:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.199 06:53:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.199 06:53:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:42.199 06:53:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:42.199 06:53:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:42.199 06:53:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.728 06:53:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:44.728 06:53:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:44.728 06:53:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:44.728 06:53:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:44.728 06:53:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:44.728 06:53:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:44.728 06:53:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:44.728 06:53:58 -- nvmf/common.sh@294 -- # net_devs=() 00:16:44.728 06:53:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:44.728 06:53:58 -- nvmf/common.sh@295 
-- # e810=() 00:16:44.728 06:53:58 -- nvmf/common.sh@295 -- # local -ga e810 00:16:44.728 06:53:58 -- nvmf/common.sh@296 -- # x722=() 00:16:44.728 06:53:58 -- nvmf/common.sh@296 -- # local -ga x722 00:16:44.728 06:53:58 -- nvmf/common.sh@297 -- # mlx=() 00:16:44.728 06:53:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:44.728 06:53:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.728 06:53:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:44.728 06:53:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:44.728 06:53:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:44.728 06:53:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:44.728 06:53:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:44.728 06:53:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:44.728 06:53:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:44.728 06:53:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:44.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:44.729 06:53:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:44.729 06:53:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:44.729 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:44.729 06:53:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:44.729 06:53:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:44.729 06:53:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.729 06:53:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:44.729 06:53:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.729 06:53:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:44.729 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:44.729 06:53:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.729 06:53:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:44.729 06:53:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.729 06:53:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:44.729 06:53:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.729 06:53:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:44.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:44.729 06:53:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.729 06:53:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:44.729 06:53:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:44.729 06:53:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:44.729 06:53:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.729 06:53:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.729 06:53:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.729 06:53:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:44.729 06:53:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.729 06:53:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.729 06:53:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:44.729 06:53:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.729 06:53:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.729 06:53:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:44.729 06:53:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:44.729 06:53:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.729 06:53:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.729 06:53:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.729 06:53:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.729 06:53:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:44.729 06:53:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.729 06:53:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.729 06:53:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.729 06:53:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:44.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:16:44.729 00:16:44.729 --- 10.0.0.2 ping statistics --- 00:16:44.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.729 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:16:44.729 06:53:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:16:44.729 00:16:44.729 --- 10.0.0.1 ping statistics --- 00:16:44.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.729 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:16:44.729 06:53:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.729 06:53:58 -- nvmf/common.sh@410 -- # return 0 00:16:44.729 06:53:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:44.729 06:53:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.729 06:53:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:44.729 06:53:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.729 06:53:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:44.729 06:53:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:44.729 06:53:58 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:44.729 06:53:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:44.729 06:53:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:44.729 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:16:44.729 06:53:58 -- nvmf/common.sh@469 -- # nvmfpid=510813 00:16:44.729 06:53:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:44.729 06:53:58 -- nvmf/common.sh@470 -- # waitforlisten 510813 00:16:44.729 06:53:58 -- common/autotest_common.sh@819 -- # '[' -z 510813 ']' 00:16:44.729 06:53:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.729 06:53:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:44.729 06:53:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.729 06:53:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:44.729 06:53:58 -- common/autotest_common.sh@10 -- # set +x 00:16:44.729 [2024-05-15 06:53:58.552696] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:44.729 [2024-05-15 06:53:58.552785] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.729 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.729 [2024-05-15 06:53:58.631232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.729 [2024-05-15 06:53:58.741556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:44.729 [2024-05-15 06:53:58.741737] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.729 [2024-05-15 06:53:58.741756] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.729 [2024-05-15 06:53:58.741770] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
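
With the target app up and listening on /var/tmp/spdk.sock, the next traced block configures it over that RPC socket. Collected as standalone commands, the rpc_cmd calls that follow are equivalent to the sketch below (assuming rpc_cmd dispatches to scripts/rpc.py; every method name and argument is copied from the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS set above
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # MALLOC_BDEV_SIZE=64 (MiB), MALLOC_BLOCK_SIZE=512 (bytes)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
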
00:16:44.729 [2024-05-15 06:53:58.741855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:44.729 [2024-05-15 06:53:58.742311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:44.729 [2024-05-15 06:53:58.742374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:44.729 [2024-05-15 06:53:58.742379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.294 06:53:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:45.294 06:53:59 -- common/autotest_common.sh@852 -- # return 0 00:16:45.294 06:53:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.294 06:53:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:45.294 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.294 06:53:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.294 06:53:59 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:45.294 06:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.294 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.294 [2024-05-15 06:53:59.521340] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.552 06:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.552 06:53:59 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:45.552 06:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.552 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 Malloc0 00:16:45.552 06:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.552 06:53:59 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:45.552 06:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.552 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 06:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.552 06:53:59 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:45.552 06:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.552 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 06:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.552 06:53:59 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.552 06:53:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.552 06:53:59 -- common/autotest_common.sh@10 -- # set +x 00:16:45.552 [2024-05-15 06:53:59.574425] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.552 06:53:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.552 06:53:59 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:45.552 06:53:59 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:45.552 06:53:59 -- nvmf/common.sh@520 -- # config=() 00:16:45.552 06:53:59 -- nvmf/common.sh@520 -- # local subsystem config 00:16:45.552 06:53:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:45.552 06:53:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:45.552 { 00:16:45.552 "params": { 00:16:45.552 "name": "Nvme$subsystem", 00:16:45.552 "trtype": "$TEST_TRANSPORT", 00:16:45.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:45.552 "adrfam": "ipv4", 00:16:45.552 "trsvcid": 
"$NVMF_PORT", 00:16:45.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:45.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:45.552 "hdgst": ${hdgst:-false}, 00:16:45.552 "ddgst": ${ddgst:-false} 00:16:45.552 }, 00:16:45.552 "method": "bdev_nvme_attach_controller" 00:16:45.552 } 00:16:45.552 EOF 00:16:45.552 )") 00:16:45.552 06:53:59 -- nvmf/common.sh@542 -- # cat 00:16:45.552 06:53:59 -- nvmf/common.sh@544 -- # jq . 00:16:45.552 06:53:59 -- nvmf/common.sh@545 -- # IFS=, 00:16:45.552 06:53:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:45.552 "params": { 00:16:45.552 "name": "Nvme1", 00:16:45.552 "trtype": "tcp", 00:16:45.552 "traddr": "10.0.0.2", 00:16:45.552 "adrfam": "ipv4", 00:16:45.552 "trsvcid": "4420", 00:16:45.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:45.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:45.552 "hdgst": false, 00:16:45.552 "ddgst": false 00:16:45.552 }, 00:16:45.552 "method": "bdev_nvme_attach_controller" 00:16:45.552 }' 00:16:45.552 [2024-05-15 06:53:59.616507] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:45.552 [2024-05-15 06:53:59.616576] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510973 ] 00:16:45.552 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.552 [2024-05-15 06:53:59.687079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.810 [2024-05-15 06:53:59.801609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.810 [2024-05-15 06:53:59.801658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.810 [2024-05-15 06:53:59.801661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.067 [2024-05-15 06:54:00.060099] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:46.067 [2024-05-15 06:54:00.060147] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:46.067 I/O targets: 00:16:46.067 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:46.067 00:16:46.067 00:16:46.067 CUnit - A unit testing framework for C - Version 2.1-3 00:16:46.067 http://cunit.sourceforge.net/ 00:16:46.067 00:16:46.067 00:16:46.067 Suite: bdevio tests on: Nvme1n1 00:16:46.067 Test: blockdev write read block ...passed 00:16:46.067 Test: blockdev write zeroes read block ...passed 00:16:46.067 Test: blockdev write zeroes read no split ...passed 00:16:46.067 Test: blockdev write zeroes read split ...passed 00:16:46.067 Test: blockdev write zeroes read split partial ...passed 00:16:46.067 Test: blockdev reset ...[2024-05-15 06:54:00.281878] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:46.067 [2024-05-15 06:54:00.281990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bdf180 (9): Bad file descriptor 00:16:46.067 [2024-05-15 06:54:00.296738] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:46.067 passed 00:16:46.067 Test: blockdev write read 8 blocks ...passed 00:16:46.067 Test: blockdev write read size > 128k ...passed 00:16:46.067 Test: blockdev write read invalid size ...passed 00:16:46.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:46.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:46.324 Test: blockdev write read max offset ...passed 00:16:46.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:46.324 Test: blockdev writev readv 8 blocks ...passed 00:16:46.324 Test: blockdev writev readv 30 x 1block ...passed 00:16:46.324 Test: blockdev writev readv block ...passed 00:16:46.324 Test: blockdev writev readv size > 128k ...passed 00:16:46.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:46.324 Test: blockdev comparev and writev ...[2024-05-15 06:54:00.478406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.478440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:46.324 [2024-05-15 06:54:00.478464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.478481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:46.324 [2024-05-15 06:54:00.478880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.478905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:46.324 [2024-05-15 06:54:00.478927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.478952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:46.324 [2024-05-15 06:54:00.479359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.479390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:46.324 [2024-05-15 06:54:00.479412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.479429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:46.324 [2024-05-15 06:54:00.479812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.479835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:46.324 [2024-05-15 06:54:00.479857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:46.324 [2024-05-15 06:54:00.479873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:46.324 passed 00:16:46.582 Test: blockdev nvme passthru rw ...passed 00:16:46.582 Test: blockdev nvme passthru vendor specific ...[2024-05-15 06:54:00.562386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.582 [2024-05-15 06:54:00.562413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:46.582 [2024-05-15 06:54:00.562649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.582 [2024-05-15 06:54:00.562673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:46.582 [2024-05-15 06:54:00.562898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.582 [2024-05-15 06:54:00.562922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:46.582 [2024-05-15 06:54:00.563160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:46.582 [2024-05-15 06:54:00.563184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:46.582 passed 00:16:46.582 Test: blockdev nvme admin passthru ...passed 00:16:46.582 Test: blockdev copy ...passed 00:16:46.582 00:16:46.582 Run Summary: Type Total Ran Passed Failed Inactive 00:16:46.582 suites 1 1 n/a 0 0 00:16:46.582 tests 23 23 23 0 0 00:16:46.582 asserts 152 152 152 0 n/a 00:16:46.582 00:16:46.582 Elapsed time = 1.123 seconds 00:16:46.840 06:54:00 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.840 06:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:46.840 06:54:00 -- common/autotest_common.sh@10 -- # set +x 00:16:46.840 06:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:46.840 06:54:00 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:46.840 06:54:00 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:46.840 06:54:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:46.840 06:54:00 -- nvmf/common.sh@116 -- # sync 00:16:46.840 06:54:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:46.840 06:54:00 -- nvmf/common.sh@119 -- # set +e 00:16:46.840 06:54:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:46.840 06:54:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:46.840 rmmod nvme_tcp 00:16:46.840 rmmod nvme_fabrics 00:16:46.840 rmmod nvme_keyring 00:16:46.840 06:54:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:46.840 06:54:00 -- nvmf/common.sh@123 -- # set -e 00:16:46.840 06:54:00 -- nvmf/common.sh@124 -- # return 0 00:16:46.840 06:54:00 -- nvmf/common.sh@477 -- # '[' -n 510813 ']' 00:16:46.840 06:54:00 -- nvmf/common.sh@478 -- # killprocess 510813 00:16:46.840 06:54:00 -- common/autotest_common.sh@926 -- # '[' -z 510813 ']' 00:16:46.840 06:54:00 -- common/autotest_common.sh@930 -- # kill -0 510813 00:16:46.840 06:54:00 -- common/autotest_common.sh@931 -- # uname 00:16:46.840 06:54:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:46.840 06:54:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 510813 00:16:46.840 06:54:00 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:46.840 06:54:00 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:46.840 06:54:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 510813' 00:16:46.840 killing process with pid 510813 00:16:46.841 06:54:00 -- common/autotest_common.sh@945 -- # kill 510813 00:16:46.841 06:54:00 -- common/autotest_common.sh@950 -- # wait 510813 00:16:47.099 06:54:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:47.099 06:54:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:47.099 06:54:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:47.099 06:54:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.099 06:54:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:47.099 06:54:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.099 06:54:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.099 06:54:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.634 06:54:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:49.634 00:16:49.634 real 0m7.378s 00:16:49.634 user 0m12.984s 00:16:49.634 sys 0m2.430s 00:16:49.634 06:54:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.634 06:54:03 -- common/autotest_common.sh@10 -- # set +x 00:16:49.634 ************************************ 00:16:49.634 END TEST nvmf_bdevio 00:16:49.634 ************************************ 00:16:49.634 06:54:03 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:49.634 06:54:03 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:49.634 06:54:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:49.634 06:54:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:49.634 06:54:03 -- common/autotest_common.sh@10 -- # set +x 00:16:49.634 ************************************ 00:16:49.634 START TEST nvmf_bdevio_no_huge 00:16:49.634 ************************************ 00:16:49.634 06:54:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:49.634 * Looking for test storage... 
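
The suite starting here repeats nvmf_bdevio with --no-hugepages. Functionally the only change, visible in the launch lines and DPDK EAL parameters later in this run, is how the two apps get their memory; side by side (both command forms are copied from the two runs; the IOVA remark is an inference from the --iova-mode=va EAL flag):

    # first run (hugepage-backed DPDK memory):
    nvmf_tgt -i 0 -e 0xFFFF -m 0x78
    bdevio --json /dev/fd/62
    # this run (plain 4 KiB pages): -s 1024 pre-reserves 1024 MiB of
    # malloc()-backed memory, and EAL runs with --iova-mode=va because
    # physically contiguous hugepages are no longer available
    nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    bdevio --json /dev/fd/62 --no-huge -s 1024
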
00:16:49.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.634 06:54:03 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.634 06:54:03 -- nvmf/common.sh@7 -- # uname -s 00:16:49.634 06:54:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.634 06:54:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.634 06:54:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.634 06:54:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.634 06:54:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.634 06:54:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.634 06:54:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.634 06:54:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.634 06:54:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.634 06:54:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.634 06:54:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.634 06:54:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.634 06:54:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.634 06:54:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.634 06:54:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.635 06:54:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.635 06:54:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.635 06:54:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.635 06:54:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.635 06:54:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.635 06:54:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.635 06:54:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.635 06:54:03 -- paths/export.sh@5 -- # export PATH 00:16:49.635 06:54:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.635 06:54:03 -- nvmf/common.sh@46 -- # : 0 00:16:49.635 06:54:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:49.635 06:54:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:49.635 06:54:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:49.635 06:54:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.635 06:54:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.635 06:54:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:49.635 06:54:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:49.635 06:54:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:49.635 06:54:03 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:49.635 06:54:03 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:49.635 06:54:03 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:49.635 06:54:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:49.635 06:54:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.635 06:54:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:49.635 06:54:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:49.635 06:54:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:49.635 06:54:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.635 06:54:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.635 06:54:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.635 06:54:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:49.635 06:54:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:49.635 06:54:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:49.635 06:54:03 -- common/autotest_common.sh@10 -- # set +x 00:16:52.196 06:54:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:52.196 06:54:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:52.196 06:54:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:52.196 06:54:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:52.196 06:54:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:52.196 06:54:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:52.196 06:54:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:52.196 06:54:05 -- nvmf/common.sh@294 -- # net_devs=() 00:16:52.196 06:54:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:52.196 06:54:05 -- nvmf/common.sh@295 
-- # e810=() 00:16:52.197 06:54:05 -- nvmf/common.sh@295 -- # local -ga e810 00:16:52.197 06:54:05 -- nvmf/common.sh@296 -- # x722=() 00:16:52.197 06:54:05 -- nvmf/common.sh@296 -- # local -ga x722 00:16:52.197 06:54:05 -- nvmf/common.sh@297 -- # mlx=() 00:16:52.197 06:54:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:52.197 06:54:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.197 06:54:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:52.197 06:54:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:52.197 06:54:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:52.197 06:54:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.197 06:54:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:52.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:52.197 06:54:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.197 06:54:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:52.197 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:52.197 06:54:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:52.197 06:54:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.197 06:54:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.197 06:54:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.197 06:54:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.197 06:54:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:52.197 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:52.197 06:54:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.197 06:54:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.197 06:54:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.197 06:54:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.197 06:54:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.197 06:54:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:52.197 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:52.197 06:54:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.197 06:54:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:52.197 06:54:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:52.197 06:54:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:52.197 06:54:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.197 06:54:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.197 06:54:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.197 06:54:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:52.197 06:54:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.197 06:54:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.197 06:54:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:52.197 06:54:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.197 06:54:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.197 06:54:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:52.197 06:54:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:52.197 06:54:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.197 06:54:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.197 06:54:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.197 06:54:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.197 06:54:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:52.197 06:54:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.197 06:54:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.197 06:54:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.197 06:54:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:52.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:16:52.197 00:16:52.197 --- 10.0.0.2 ping statistics --- 00:16:52.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.197 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:16:52.197 06:54:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:16:52.197 00:16:52.197 --- 10.0.0.1 ping statistics --- 00:16:52.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.197 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:16:52.197 06:54:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.197 06:54:05 -- nvmf/common.sh@410 -- # return 0 00:16:52.197 06:54:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:52.197 06:54:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.197 06:54:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:52.197 06:54:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.197 06:54:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:52.197 06:54:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:52.197 06:54:06 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:52.197 06:54:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:52.197 06:54:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:52.197 06:54:06 -- common/autotest_common.sh@10 -- # set +x 00:16:52.197 06:54:06 -- nvmf/common.sh@469 -- # nvmfpid=513585 00:16:52.197 06:54:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:52.197 06:54:06 -- nvmf/common.sh@470 -- # waitforlisten 513585 00:16:52.197 06:54:06 -- common/autotest_common.sh@819 -- # '[' -z 513585 ']' 00:16:52.197 06:54:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.197 06:54:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:52.197 06:54:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.197 06:54:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:52.198 06:54:06 -- common/autotest_common.sh@10 -- # set +x 00:16:52.198 [2024-05-15 06:54:06.061569] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:52.198 [2024-05-15 06:54:06.061651] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:52.198 [2024-05-15 06:54:06.156539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.198 [2024-05-15 06:54:06.265584] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:52.198 [2024-05-15 06:54:06.265731] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.198 [2024-05-15 06:54:06.265748] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.198 [2024-05-15 06:54:06.265762] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
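
Core placement for the reactor lines that follow is fixed by the two masks on the command lines: the target gets -c 0x78 (from -m 0x78) and bdevio gets -c 0x7, so the two processes never share a core. Decoding the masks with plain shell arithmetic:

    for mask in 0x78 0x7; do
        printf '%s ->' "$mask"
        for core in {0..7}; do
            (( (mask >> core) & 1 )) && printf ' %d' "$core"
        done
        echo
    done
    # 0x78 -> 3 4 5 6   (the four nvmf_tgt reactors below)
    # 0x7  -> 0 1 2     (the three bdevio reactors)
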
00:16:52.198 [2024-05-15 06:54:06.265849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.198 [2024-05-15 06:54:06.265901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:52.198 [2024-05-15 06:54:06.265958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:52.198 [2024-05-15 06:54:06.265963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.131 06:54:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.131 06:54:07 -- common/autotest_common.sh@852 -- # return 0 00:16:53.131 06:54:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:53.131 06:54:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:53.131 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.131 06:54:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.131 06:54:07 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.131 06:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.131 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.131 [2024-05-15 06:54:07.037532] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.131 06:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.131 06:54:07 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:53.131 06:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.131 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.131 Malloc0 00:16:53.131 06:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.131 06:54:07 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:53.131 06:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.131 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.131 06:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.131 06:54:07 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:53.131 06:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.131 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.131 06:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.131 06:54:07 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.131 06:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.131 06:54:07 -- common/autotest_common.sh@10 -- # set +x 00:16:53.131 [2024-05-15 06:54:07.075528] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.131 06:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.131 06:54:07 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:53.131 06:54:07 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:53.131 06:54:07 -- nvmf/common.sh@520 -- # config=() 00:16:53.131 06:54:07 -- nvmf/common.sh@520 -- # local subsystem config 00:16:53.131 06:54:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:53.131 06:54:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:53.131 { 00:16:53.131 "params": { 00:16:53.131 "name": "Nvme$subsystem", 00:16:53.131 "trtype": "$TEST_TRANSPORT", 00:16:53.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.131 "adrfam": "ipv4", 00:16:53.131 
"trsvcid": "$NVMF_PORT", 00:16:53.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.131 "hdgst": ${hdgst:-false}, 00:16:53.131 "ddgst": ${ddgst:-false} 00:16:53.131 }, 00:16:53.131 "method": "bdev_nvme_attach_controller" 00:16:53.131 } 00:16:53.131 EOF 00:16:53.131 )") 00:16:53.131 06:54:07 -- nvmf/common.sh@542 -- # cat 00:16:53.131 06:54:07 -- nvmf/common.sh@544 -- # jq . 00:16:53.131 06:54:07 -- nvmf/common.sh@545 -- # IFS=, 00:16:53.131 06:54:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:53.131 "params": { 00:16:53.131 "name": "Nvme1", 00:16:53.131 "trtype": "tcp", 00:16:53.131 "traddr": "10.0.0.2", 00:16:53.131 "adrfam": "ipv4", 00:16:53.131 "trsvcid": "4420", 00:16:53.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.132 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:53.132 "hdgst": false, 00:16:53.132 "ddgst": false 00:16:53.132 }, 00:16:53.132 "method": "bdev_nvme_attach_controller" 00:16:53.132 }' 00:16:53.132 [2024-05-15 06:54:07.113883] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:53.132 [2024-05-15 06:54:07.113994] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid513747 ] 00:16:53.132 [2024-05-15 06:54:07.189133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.132 [2024-05-15 06:54:07.302310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.132 [2024-05-15 06:54:07.302358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.132 [2024-05-15 06:54:07.302361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.389 [2024-05-15 06:54:07.500766] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:53.389 [2024-05-15 06:54:07.500819] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:53.389 I/O targets: 00:16:53.389 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:53.389 00:16:53.389 00:16:53.389 CUnit - A unit testing framework for C - Version 2.1-3 00:16:53.389 http://cunit.sourceforge.net/ 00:16:53.389 00:16:53.389 00:16:53.389 Suite: bdevio tests on: Nvme1n1 00:16:53.389 Test: blockdev write read block ...passed 00:16:53.389 Test: blockdev write zeroes read block ...passed 00:16:53.389 Test: blockdev write zeroes read no split ...passed 00:16:53.647 Test: blockdev write zeroes read split ...passed 00:16:53.647 Test: blockdev write zeroes read split partial ...passed 00:16:53.647 Test: blockdev reset ...[2024-05-15 06:54:07.732483] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:53.647 [2024-05-15 06:54:07.732587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1bb00 (9): Bad file descriptor 00:16:53.647 [2024-05-15 06:54:07.760290] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:53.647 passed 00:16:53.647 Test: blockdev write read 8 blocks ...passed 00:16:53.647 Test: blockdev write read size > 128k ...passed 00:16:53.647 Test: blockdev write read invalid size ...passed 00:16:53.647 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:53.647 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:53.647 Test: blockdev write read max offset ...passed 00:16:53.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:53.904 Test: blockdev writev readv 8 blocks ...passed 00:16:53.904 Test: blockdev writev readv 30 x 1block ...passed 00:16:53.904 Test: blockdev writev readv block ...passed 00:16:53.904 Test: blockdev writev readv size > 128k ...passed 00:16:53.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:53.904 Test: blockdev comparev and writev ...[2024-05-15 06:54:08.062264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.062300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.904 [2024-05-15 06:54:08.062325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.062342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:53.904 [2024-05-15 06:54:08.062760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.062785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:53.904 [2024-05-15 06:54:08.062808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.062824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:53.904 [2024-05-15 06:54:08.063254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.063279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:53.904 [2024-05-15 06:54:08.063300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.063316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:53.904 [2024-05-15 06:54:08.063738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.063763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:53.904 [2024-05-15 06:54:08.063784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.904 [2024-05-15 06:54:08.063800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:53.904 passed 00:16:54.161 Test: blockdev nvme passthru rw ...passed 00:16:54.161 Test: blockdev nvme passthru vendor specific ...[2024-05-15 06:54:08.148367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.161 [2024-05-15 06:54:08.148395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:54.161 [2024-05-15 06:54:08.148666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.161 [2024-05-15 06:54:08.148689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:54.161 [2024-05-15 06:54:08.148908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.161 [2024-05-15 06:54:08.148940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:54.161 [2024-05-15 06:54:08.149187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:54.161 [2024-05-15 06:54:08.149211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:54.161 passed 00:16:54.161 Test: blockdev nvme admin passthru ...passed 00:16:54.161 Test: blockdev copy ...passed 00:16:54.161 00:16:54.161 Run Summary: Type Total Ran Passed Failed Inactive 00:16:54.161 suites 1 1 n/a 0 0 00:16:54.161 tests 23 23 23 0 0 00:16:54.162 asserts 152 152 152 0 n/a 00:16:54.162 00:16:54.162 Elapsed time = 1.377 seconds 00:16:54.419 06:54:08 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.419 06:54:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.419 06:54:08 -- common/autotest_common.sh@10 -- # set +x 00:16:54.419 06:54:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.419 06:54:08 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:54.419 06:54:08 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:54.419 06:54:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:54.419 06:54:08 -- nvmf/common.sh@116 -- # sync 00:16:54.419 06:54:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:54.419 06:54:08 -- nvmf/common.sh@119 -- # set +e 00:16:54.419 06:54:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:54.419 06:54:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:54.419 rmmod nvme_tcp 00:16:54.419 rmmod nvme_fabrics 00:16:54.419 rmmod nvme_keyring 00:16:54.419 06:54:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:54.419 06:54:08 -- nvmf/common.sh@123 -- # set -e 00:16:54.419 06:54:08 -- nvmf/common.sh@124 -- # return 0 00:16:54.419 06:54:08 -- nvmf/common.sh@477 -- # '[' -n 513585 ']' 00:16:54.419 06:54:08 -- nvmf/common.sh@478 -- # killprocess 513585 00:16:54.419 06:54:08 -- common/autotest_common.sh@926 -- # '[' -z 513585 ']' 00:16:54.419 06:54:08 -- common/autotest_common.sh@930 -- # kill -0 513585 00:16:54.419 06:54:08 -- common/autotest_common.sh@931 -- # uname 00:16:54.419 06:54:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:54.419 06:54:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 513585 00:16:54.677 06:54:08 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:54.677 06:54:08 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:54.677 06:54:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 513585' 00:16:54.677 killing process with pid 513585 00:16:54.677 06:54:08 -- common/autotest_common.sh@945 -- # kill 513585 00:16:54.677 06:54:08 -- common/autotest_common.sh@950 -- # wait 513585 00:16:54.934 06:54:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:54.935 06:54:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:54.935 06:54:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:54.935 06:54:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.935 06:54:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:54.935 06:54:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.935 06:54:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.935 06:54:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.468 06:54:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:57.468 00:16:57.468 real 0m7.841s 00:16:57.468 user 0m14.259s 00:16:57.468 sys 0m2.935s 00:16:57.468 06:54:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.468 06:54:11 -- common/autotest_common.sh@10 -- # set +x 00:16:57.468 ************************************ 00:16:57.468 END TEST nvmf_bdevio_no_huge 00:16:57.468 ************************************ 00:16:57.468 06:54:11 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:57.468 06:54:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:57.468 06:54:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:57.468 06:54:11 -- common/autotest_common.sh@10 -- # set +x 00:16:57.468 ************************************ 00:16:57.468 START TEST nvmf_tls 00:16:57.468 ************************************ 00:16:57.468 06:54:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:57.468 * Looking for test storage... 
00:16:57.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.468 06:54:11 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.468 06:54:11 -- nvmf/common.sh@7 -- # uname -s 00:16:57.468 06:54:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.468 06:54:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.468 06:54:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.468 06:54:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.468 06:54:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.468 06:54:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.468 06:54:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.468 06:54:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.468 06:54:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.468 06:54:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.468 06:54:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.468 06:54:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.468 06:54:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.468 06:54:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.468 06:54:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.468 06:54:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.468 06:54:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.468 06:54:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.468 06:54:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.468 06:54:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.468 06:54:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.468 06:54:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.468 06:54:11 -- paths/export.sh@5 -- # export PATH 00:16:57.468 06:54:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.468 06:54:11 -- nvmf/common.sh@46 -- # : 0 00:16:57.468 06:54:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:57.468 06:54:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:57.468 06:54:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:57.468 06:54:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.468 06:54:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.469 06:54:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:57.469 06:54:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:57.469 06:54:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:57.469 06:54:11 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.469 06:54:11 -- target/tls.sh@71 -- # nvmftestinit 00:16:57.469 06:54:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:57.469 06:54:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.469 06:54:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:57.469 06:54:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:57.469 06:54:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:57.469 06:54:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.469 06:54:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.469 06:54:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.469 06:54:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:57.469 06:54:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:57.469 06:54:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:57.469 06:54:11 -- common/autotest_common.sh@10 -- # set +x 00:16:59.369 06:54:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:59.369 06:54:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:59.369 06:54:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:59.369 06:54:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:59.369 06:54:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:59.369 06:54:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:59.369 06:54:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:59.369 06:54:13 -- nvmf/common.sh@294 -- # net_devs=() 00:16:59.369 06:54:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:59.369 06:54:13 -- nvmf/common.sh@295 -- # e810=() 00:16:59.369 
06:54:13 -- nvmf/common.sh@295 -- # local -ga e810 00:16:59.369 06:54:13 -- nvmf/common.sh@296 -- # x722=() 00:16:59.369 06:54:13 -- nvmf/common.sh@296 -- # local -ga x722 00:16:59.369 06:54:13 -- nvmf/common.sh@297 -- # mlx=() 00:16:59.369 06:54:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:59.369 06:54:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.369 06:54:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:59.369 06:54:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:59.369 06:54:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:59.369 06:54:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.369 06:54:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:59.369 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:59.369 06:54:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.369 06:54:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:59.369 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:59.369 06:54:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:59.369 06:54:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.369 06:54:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.369 06:54:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.369 06:54:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.369 06:54:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:59.369 Found net devices under 
0000:0a:00.0: cvl_0_0 00:16:59.369 06:54:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.369 06:54:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.369 06:54:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.369 06:54:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.369 06:54:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.369 06:54:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:59.369 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:59.369 06:54:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.369 06:54:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:59.369 06:54:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:59.369 06:54:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:59.369 06:54:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:59.369 06:54:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.369 06:54:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.369 06:54:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.369 06:54:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:59.369 06:54:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.369 06:54:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.369 06:54:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:59.369 06:54:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.369 06:54:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.369 06:54:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:59.369 06:54:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:59.369 06:54:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.369 06:54:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.627 06:54:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.627 06:54:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.627 06:54:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:59.627 06:54:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.627 06:54:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.627 06:54:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.627 06:54:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:59.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:16:59.627 00:16:59.627 --- 10.0.0.2 ping statistics --- 00:16:59.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.627 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:16:59.627 06:54:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:16:59.627 00:16:59.627 --- 10.0.0.1 ping statistics --- 00:16:59.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.627 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:59.627 06:54:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.627 06:54:13 -- nvmf/common.sh@410 -- # return 0 00:16:59.627 06:54:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:59.627 06:54:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.627 06:54:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:59.627 06:54:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:59.627 06:54:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.627 06:54:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:59.627 06:54:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:59.627 06:54:13 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:59.627 06:54:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:59.627 06:54:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:59.627 06:54:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.627 06:54:13 -- nvmf/common.sh@469 -- # nvmfpid=516751 00:16:59.627 06:54:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:59.627 06:54:13 -- nvmf/common.sh@470 -- # waitforlisten 516751 00:16:59.627 06:54:13 -- common/autotest_common.sh@819 -- # '[' -z 516751 ']' 00:16:59.627 06:54:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.627 06:54:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:59.627 06:54:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.627 06:54:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:59.627 06:54:13 -- common/autotest_common.sh@10 -- # set +x 00:16:59.627 [2024-05-15 06:54:13.814999] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:59.627 [2024-05-15 06:54:13.815081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.627 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.885 [2024-05-15 06:54:13.898625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.885 [2024-05-15 06:54:14.013858] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:59.885 [2024-05-15 06:54:14.014048] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.885 [2024-05-15 06:54:14.014069] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.885 [2024-05-15 06:54:14.014083] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
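[Editor's note] The trace above builds the standard SPDK TCP loopback fixture: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, the peer port (cvl_0_1) stays in the root namespace as 10.0.0.1 for the initiator, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace. Condensed from the commands traced above (absolute workspace paths shortened):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc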
00:16:59.885 [2024-05-15 06:54:14.014119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.818 06:54:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.818 06:54:14 -- common/autotest_common.sh@852 -- # return 0 00:17:00.818 06:54:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:00.818 06:54:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:00.818 06:54:14 -- common/autotest_common.sh@10 -- # set +x 00:17:00.818 06:54:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.818 06:54:14 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:17:00.818 06:54:14 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:00.818 true 00:17:00.818 06:54:14 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:00.818 06:54:14 -- target/tls.sh@82 -- # jq -r .tls_version 00:17:01.076 06:54:15 -- target/tls.sh@82 -- # version=0 00:17:01.076 06:54:15 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:17:01.076 06:54:15 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:01.334 06:54:15 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:01.334 06:54:15 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:01.592 06:54:15 -- target/tls.sh@90 -- # version=13 00:17:01.592 06:54:15 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:01.592 06:54:15 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:01.850 06:54:15 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:01.850 06:54:15 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:02.109 06:54:16 -- target/tls.sh@98 -- # version=7 00:17:02.109 06:54:16 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:02.109 06:54:16 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:02.109 06:54:16 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:02.368 06:54:16 -- target/tls.sh@105 -- # ktls=false 00:17:02.368 06:54:16 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:02.368 06:54:16 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:02.626 06:54:16 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:02.626 06:54:16 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:02.884 06:54:16 -- target/tls.sh@113 -- # ktls=true 00:17:02.884 06:54:16 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:02.884 06:54:16 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:03.141 06:54:17 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:03.141 06:54:17 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:03.399 06:54:17 -- target/tls.sh@121 -- # ktls=false 00:17:03.399 06:54:17 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:03.399 06:54:17 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:17:03.399 06:54:17 -- target/tls.sh@49 -- # local key hash crc 00:17:03.399 06:54:17 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:03.399 06:54:17 -- target/tls.sh@51 -- # hash=01 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # gzip -1 -c 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # tail -c8 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # head -c 4 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # crc='p$H�' 00:17:03.399 06:54:17 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:03.399 06:54:17 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:03.399 06:54:17 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:03.399 06:54:17 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:03.399 06:54:17 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:03.399 06:54:17 -- target/tls.sh@49 -- # local key hash crc 00:17:03.399 06:54:17 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:03.399 06:54:17 -- target/tls.sh@51 -- # hash=01 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # gzip -1 -c 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # tail -c8 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # head -c 4 00:17:03.399 06:54:17 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:03.399 06:54:17 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:03.399 06:54:17 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:03.399 06:54:17 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:03.399 06:54:17 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:03.399 06:54:17 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:03.399 06:54:17 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:03.399 06:54:17 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:03.399 06:54:17 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:03.399 06:54:17 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:03.399 06:54:17 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:03.399 06:54:17 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:03.656 06:54:17 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:03.914 06:54:18 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:03.914 06:54:18 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:03.914 06:54:18 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:04.171 [2024-05-15 06:54:18.287628] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
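[Editor's note] The format_interchange_psk trace above derives the NVMe TLS PSK interchange string without any crypto tooling: the last 8 bytes of a gzip stream are the CRC-32 of the uncompressed input (little-endian) followed by ISIZE, so tail -c8 | head -c4 extracts the raw CRC-32 bytes, which is why the traced crc value ('p$H' plus an unprintable byte) looks like binary garbage. A minimal sketch of the same computation, matching the key produced above (the 01 field selects the SHA-256 hash per the interchange format; the 02 used later for the 48-byte key selects SHA-384):

    key=00112233445566778899aabbccddeeff                       # configured PSK, treated as an ASCII string
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # raw CRC-32 bytes from the gzip trailer
    echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: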
00:17:04.171 06:54:18 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:04.429 06:54:18 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:04.686 [2024-05-15 06:54:18.748844] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:04.686 [2024-05-15 06:54:18.749072] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.686 06:54:18 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:04.944 malloc0 00:17:04.944 06:54:19 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:05.201 06:54:19 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:05.459 06:54:19 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:05.459 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.459 Initializing NVMe Controllers 00:17:15.459 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:15.459 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:15.459 Initialization complete. Launching workers. 
00:17:15.459 ======================================================== 00:17:15.459 Latency(us) 00:17:15.460 Device Information : IOPS MiB/s Average min max 00:17:15.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7753.50 30.29 8257.00 1170.83 9421.65 00:17:15.460 ======================================================== 00:17:15.460 Total : 7753.50 30.29 8257.00 1170.83 9421.65 00:17:15.460 00:17:15.460 06:54:29 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:15.460 06:54:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:15.460 06:54:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:15.460 06:54:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:15.460 06:54:29 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:15.460 06:54:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:15.460 06:54:29 -- target/tls.sh@28 -- # bdevperf_pid=518725 00:17:15.460 06:54:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.460 06:54:29 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:15.460 06:54:29 -- target/tls.sh@31 -- # waitforlisten 518725 /var/tmp/bdevperf.sock 00:17:15.460 06:54:29 -- common/autotest_common.sh@819 -- # '[' -z 518725 ']' 00:17:15.460 06:54:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.460 06:54:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:15.460 06:54:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.460 06:54:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:15.460 06:54:29 -- common/autotest_common.sh@10 -- # set +x 00:17:15.460 [2024-05-15 06:54:29.639526] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
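[Editor's note] Each TLS case in the remainder of this suite follows the same bdevperf pattern: start the app idle (-z) on a private RPC socket, attach a TLS-enabled controller over JSON-RPC with an explicit --psk file, then kick off the I/O phase through the bdevperf helper script. Condensed from the traces that follow (workspace paths shortened):

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk ./test/nvmf/target/key1.txt
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests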
00:17:15.460 [2024-05-15 06:54:29.639611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518725 ] 00:17:15.460 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.717 [2024-05-15 06:54:29.711800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.717 [2024-05-15 06:54:29.818183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.647 06:54:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:16.647 06:54:30 -- common/autotest_common.sh@852 -- # return 0 00:17:16.647 06:54:30 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:16.647 [2024-05-15 06:54:30.847535] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.904 TLSTESTn1 00:17:16.904 06:54:30 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:16.904 Running I/O for 10 seconds... 00:17:29.098 00:17:29.098 Latency(us) 00:17:29.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:29.098 Verification LBA range: start 0x0 length 0x2000 00:17:29.098 TLSTESTn1 : 10.05 1029.61 4.02 0.00 0.00 124031.68 4951.61 145247.19 00:17:29.098 =================================================================================================================== 00:17:29.098 Total : 1029.61 4.02 0.00 0.00 124031.68 4951.61 145247.19 00:17:29.098 0 00:17:29.098 06:54:41 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:29.098 06:54:41 -- target/tls.sh@45 -- # killprocess 518725 00:17:29.098 06:54:41 -- common/autotest_common.sh@926 -- # '[' -z 518725 ']' 00:17:29.098 06:54:41 -- common/autotest_common.sh@930 -- # kill -0 518725 00:17:29.098 06:54:41 -- common/autotest_common.sh@931 -- # uname 00:17:29.098 06:54:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:29.098 06:54:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 518725 00:17:29.098 06:54:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:29.098 06:54:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:29.098 06:54:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 518725' 00:17:29.098 killing process with pid 518725 00:17:29.098 06:54:41 -- common/autotest_common.sh@945 -- # kill 518725 00:17:29.098 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.098 00:17:29.098 Latency(us) 00:17:29.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.098 =================================================================================================================== 00:17:29.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.098 06:54:41 -- common/autotest_common.sh@950 -- # wait 518725 00:17:29.098 06:54:41 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:29.098 06:54:41 -- common/autotest_common.sh@640 -- # local es=0 00:17:29.098 06:54:41 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:29.098 06:54:41 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:29.098 06:54:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:29.098 06:54:41 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:29.098 06:54:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:29.098 06:54:41 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:29.098 06:54:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:29.098 06:54:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:29.098 06:54:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:29.098 06:54:41 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:17:29.098 06:54:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.098 06:54:41 -- target/tls.sh@28 -- # bdevperf_pid=520095 00:17:29.098 06:54:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.098 06:54:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.098 06:54:41 -- target/tls.sh@31 -- # waitforlisten 520095 /var/tmp/bdevperf.sock 00:17:29.098 06:54:41 -- common/autotest_common.sh@819 -- # '[' -z 520095 ']' 00:17:29.098 06:54:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.098 06:54:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.098 06:54:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.098 06:54:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.098 06:54:41 -- common/autotest_common.sh@10 -- # set +x 00:17:29.098 [2024-05-15 06:54:41.476285] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:17:29.098 [2024-05-15 06:54:41.476369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520095 ] 00:17:29.098 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.098 [2024-05-15 06:54:41.543796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.098 [2024-05-15 06:54:41.647354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.098 06:54:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:29.098 06:54:42 -- common/autotest_common.sh@852 -- # return 0 00:17:29.098 06:54:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:29.098 [2024-05-15 06:54:42.674196] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.098 [2024-05-15 06:54:42.679631] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:29.098 [2024-05-15 06:54:42.680089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfde870 (107): Transport endpoint is not connected 00:17:29.098 [2024-05-15 06:54:42.681077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfde870 (9): Bad file descriptor 00:17:29.098 [2024-05-15 06:54:42.682076] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:29.098 [2024-05-15 06:54:42.682096] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:29.098 [2024-05-15 06:54:42.682113] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:29.098 request: 00:17:29.098 { 00:17:29.098 "name": "TLSTEST", 00:17:29.098 "trtype": "tcp", 00:17:29.098 "traddr": "10.0.0.2", 00:17:29.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.099 "adrfam": "ipv4", 00:17:29.099 "trsvcid": "4420", 00:17:29.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.099 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:17:29.099 "method": "bdev_nvme_attach_controller", 00:17:29.099 "req_id": 1 00:17:29.099 } 00:17:29.099 Got JSON-RPC error response 00:17:29.099 response: 00:17:29.099 { 00:17:29.099 "code": -32602, 00:17:29.099 "message": "Invalid parameters" 00:17:29.099 } 00:17:29.099 06:54:42 -- target/tls.sh@36 -- # killprocess 520095 00:17:29.099 06:54:42 -- common/autotest_common.sh@926 -- # '[' -z 520095 ']' 00:17:29.099 06:54:42 -- common/autotest_common.sh@930 -- # kill -0 520095 00:17:29.099 06:54:42 -- common/autotest_common.sh@931 -- # uname 00:17:29.099 06:54:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:29.099 06:54:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 520095 00:17:29.099 06:54:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:29.099 06:54:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:29.099 06:54:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 520095' 00:17:29.099 killing process with pid 520095 00:17:29.099 06:54:42 -- common/autotest_common.sh@945 -- # kill 520095 00:17:29.099 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.099 00:17:29.099 Latency(us) 00:17:29.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.099 =================================================================================================================== 00:17:29.099 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.099 06:54:42 -- common/autotest_common.sh@950 -- # wait 520095 00:17:29.099 06:54:42 -- target/tls.sh@37 -- # return 1 00:17:29.099 06:54:42 -- common/autotest_common.sh@643 -- # es=1 00:17:29.099 06:54:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:29.099 06:54:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:29.099 06:54:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:29.099 06:54:42 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:29.099 06:54:42 -- common/autotest_common.sh@640 -- # local es=0 00:17:29.099 06:54:42 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:29.099 06:54:42 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:29.099 06:54:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:29.099 06:54:42 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:29.099 06:54:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:29.099 06:54:42 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:29.099 06:54:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:29.099 06:54:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:29.099 06:54:42 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:17:29.099 06:54:42 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:29.099 06:54:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.099 06:54:42 -- target/tls.sh@28 -- # bdevperf_pid=520244 00:17:29.099 06:54:42 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.099 06:54:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.099 06:54:42 -- target/tls.sh@31 -- # waitforlisten 520244 /var/tmp/bdevperf.sock 00:17:29.099 06:54:42 -- common/autotest_common.sh@819 -- # '[' -z 520244 ']' 00:17:29.099 06:54:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.099 06:54:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.099 06:54:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.099 06:54:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.099 06:54:42 -- common/autotest_common.sh@10 -- # set +x 00:17:29.099 [2024-05-15 06:54:43.024385] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:29.099 [2024-05-15 06:54:43.024464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520244 ] 00:17:29.099 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.099 [2024-05-15 06:54:43.096258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.099 [2024-05-15 06:54:43.199879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.033 06:54:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.033 06:54:43 -- common/autotest_common.sh@852 -- # return 0 00:17:30.033 06:54:43 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:30.033 [2024-05-15 06:54:44.186410] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:30.033 [2024-05-15 06:54:44.191915] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:30.033 [2024-05-15 06:54:44.191983] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:30.033 [2024-05-15 06:54:44.192038] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:30.033 [2024-05-15 06:54:44.192554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1957870 (107): Transport endpoint is not connected 00:17:30.033 [2024-05-15 06:54:44.193543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1957870 (9): Bad file descriptor 00:17:30.033 [2024-05-15 06:54:44.194541] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:30.033 [2024-05-15 06:54:44.194562] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:30.033 [2024-05-15 06:54:44.194592] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:30.033 request: 00:17:30.033 { 00:17:30.033 "name": "TLSTEST", 00:17:30.033 "trtype": "tcp", 00:17:30.033 "traddr": "10.0.0.2", 00:17:30.033 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:30.033 "adrfam": "ipv4", 00:17:30.033 "trsvcid": "4420", 00:17:30.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.033 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:30.033 "method": "bdev_nvme_attach_controller", 00:17:30.033 "req_id": 1 00:17:30.033 } 00:17:30.033 Got JSON-RPC error response 00:17:30.033 response: 00:17:30.033 { 00:17:30.033 "code": -32602, 00:17:30.033 "message": "Invalid parameters" 00:17:30.033 } 00:17:30.033 06:54:44 -- target/tls.sh@36 -- # killprocess 520244 00:17:30.033 06:54:44 -- common/autotest_common.sh@926 -- # '[' -z 520244 ']' 00:17:30.033 06:54:44 -- common/autotest_common.sh@930 -- # kill -0 520244 00:17:30.033 06:54:44 -- common/autotest_common.sh@931 -- # uname 00:17:30.033 06:54:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:30.033 06:54:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 520244 00:17:30.033 06:54:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:30.033 06:54:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:30.033 06:54:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 520244' 00:17:30.033 killing process with pid 520244 00:17:30.033 06:54:44 -- common/autotest_common.sh@945 -- # kill 520244 00:17:30.033 Received shutdown signal, test time was about 10.000000 seconds 00:17:30.033 00:17:30.033 Latency(us) 00:17:30.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.033 =================================================================================================================== 00:17:30.033 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:30.033 06:54:44 -- common/autotest_common.sh@950 -- # wait 520244 00:17:30.291 06:54:44 -- target/tls.sh@37 -- # return 1 00:17:30.291 06:54:44 -- common/autotest_common.sh@643 -- # es=1 00:17:30.291 06:54:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:30.291 06:54:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:30.291 06:54:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:30.291 06:54:44 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:30.291 06:54:44 -- common/autotest_common.sh@640 -- # local es=0 00:17:30.291 06:54:44 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:30.291 06:54:44 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:30.291 06:54:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:30.291 06:54:44 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:30.291 06:54:44 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:30.291 06:54:44 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:30.291 06:54:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.291 06:54:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:30.291 06:54:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:30.291 06:54:44 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:30.291 06:54:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.291 06:54:44 -- target/tls.sh@28 -- # bdevperf_pid=520515 00:17:30.291 06:54:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.291 06:54:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.291 06:54:44 -- target/tls.sh@31 -- # waitforlisten 520515 /var/tmp/bdevperf.sock 00:17:30.291 06:54:44 -- common/autotest_common.sh@819 -- # '[' -z 520515 ']' 00:17:30.291 06:54:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.291 06:54:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.291 06:54:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.291 06:54:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.291 06:54:44 -- common/autotest_common.sh@10 -- # set +x 00:17:30.549 [2024-05-15 06:54:44.539328] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
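[Editor's note] The attach failures in this part of the suite are intentional: each mismatched subsystem/host/PSK combination is expected to be rejected (JSON-RPC "Invalid parameters", run_bdevperf returning 1), so the call is wrapped in the autotest NOT helper, whose success condition is the wrapped command failing, as the es=1 bookkeeping in the traces shows. A minimal stand-in with the same contract (illustrative only; the real helper in common/autotest_common.sh performs additional argument validation):

    # NOT cmd...: succeed only if cmd fails
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # command failed, which is what the test expects
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 ./test/nvmf/target/key1.txt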
00:17:30.549 [2024-05-15 06:54:44.539406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520515 ] 00:17:30.549 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.549 [2024-05-15 06:54:44.608503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.549 [2024-05-15 06:54:44.711434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.479 06:54:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.479 06:54:45 -- common/autotest_common.sh@852 -- # return 0 00:17:31.479 06:54:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:31.479 [2024-05-15 06:54:45.703058] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.737 [2024-05-15 06:54:45.715052] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:31.737 [2024-05-15 06:54:45.715103] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:31.737 [2024-05-15 06:54:45.715162] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:31.737 [2024-05-15 06:54:45.716113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b870 (107): Transport endpoint is not connected 00:17:31.737 [2024-05-15 06:54:45.717100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173b870 (9): Bad file descriptor 00:17:31.737 [2024-05-15 06:54:45.718098] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:31.737 [2024-05-15 06:54:45.718119] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:31.737 [2024-05-15 06:54:45.718135] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:31.737 request: 00:17:31.737 { 00:17:31.737 "name": "TLSTEST", 00:17:31.737 "trtype": "tcp", 00:17:31.737 "traddr": "10.0.0.2", 00:17:31.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.737 "adrfam": "ipv4", 00:17:31.737 "trsvcid": "4420", 00:17:31.737 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:31.737 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:31.737 "method": "bdev_nvme_attach_controller", 00:17:31.737 "req_id": 1 00:17:31.737 } 00:17:31.737 Got JSON-RPC error response 00:17:31.737 response: 00:17:31.737 { 00:17:31.737 "code": -32602, 00:17:31.737 "message": "Invalid parameters" 00:17:31.737 } 00:17:31.737 06:54:45 -- target/tls.sh@36 -- # killprocess 520515 00:17:31.737 06:54:45 -- common/autotest_common.sh@926 -- # '[' -z 520515 ']' 00:17:31.737 06:54:45 -- common/autotest_common.sh@930 -- # kill -0 520515 00:17:31.737 06:54:45 -- common/autotest_common.sh@931 -- # uname 00:17:31.737 06:54:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:31.737 06:54:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 520515 00:17:31.737 06:54:45 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:31.737 06:54:45 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:31.737 06:54:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 520515' 00:17:31.737 killing process with pid 520515 00:17:31.737 06:54:45 -- common/autotest_common.sh@945 -- # kill 520515 00:17:31.737 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.737 00:17:31.737 Latency(us) 00:17:31.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.737 =================================================================================================================== 00:17:31.738 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.738 06:54:45 -- common/autotest_common.sh@950 -- # wait 520515 00:17:31.996 06:54:46 -- target/tls.sh@37 -- # return 1 00:17:31.996 06:54:46 -- common/autotest_common.sh@643 -- # es=1 00:17:31.996 06:54:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:31.996 06:54:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:31.996 06:54:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:31.996 06:54:46 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:31.996 06:54:46 -- common/autotest_common.sh@640 -- # local es=0 00:17:31.996 06:54:46 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:31.996 06:54:46 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:31.996 06:54:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:31.996 06:54:46 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:31.996 06:54:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:31.996 06:54:46 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:31.996 06:54:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:31.996 06:54:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:31.996 06:54:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:31.996 06:54:46 -- target/tls.sh@23 -- # psk= 00:17:31.996 06:54:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:31.996 06:54:46 -- target/tls.sh@28 -- # 
bdevperf_pid=520669 00:17:31.996 06:54:46 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:31.996 06:54:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.996 06:54:46 -- target/tls.sh@31 -- # waitforlisten 520669 /var/tmp/bdevperf.sock 00:17:31.996 06:54:46 -- common/autotest_common.sh@819 -- # '[' -z 520669 ']' 00:17:31.996 06:54:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.996 06:54:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:31.996 06:54:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.996 06:54:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:31.996 06:54:46 -- common/autotest_common.sh@10 -- # set +x 00:17:31.996 [2024-05-15 06:54:46.055964] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:31.996 [2024-05-15 06:54:46.056046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid520669 ] 00:17:31.996 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.996 [2024-05-15 06:54:46.122746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.996 [2024-05-15 06:54:46.222982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.929 06:54:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:32.929 06:54:46 -- common/autotest_common.sh@852 -- # return 0 00:17:32.929 06:54:46 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:33.187 [2024-05-15 06:54:47.218899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:33.187 [2024-05-15 06:54:47.221037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e1330 (9): Bad file descriptor 00:17:33.187 [2024-05-15 06:54:47.222034] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:33.187 [2024-05-15 06:54:47.222055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:33.187 [2024-05-15 06:54:47.222071] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
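This second attach is driven through the NOT wrapper with an empty psk argument, so bdev_nvme_attach_controller is issued without --psk at all; a plain-TCP connect against the TLS-protected listener is expected to be dropped by the target, which is exactly the "Transport endpoint is not connected" failure that follows. A sketch of that negative-test pattern (the NOT helper below is illustrative, not a copy of the autotest implementation; paths assume an SPDK checkout):

# Succeed only when the wrapped command fails.
NOT() { if "$@"; then return 1; else return 0; fi; }

# No --psk: the listener was created with -k, so this must not connect.
NOT ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1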
00:17:33.187 request: 00:17:33.187 { 00:17:33.187 "name": "TLSTEST", 00:17:33.187 "trtype": "tcp", 00:17:33.187 "traddr": "10.0.0.2", 00:17:33.187 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:33.187 "adrfam": "ipv4", 00:17:33.187 "trsvcid": "4420", 00:17:33.187 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.187 "method": "bdev_nvme_attach_controller", 00:17:33.187 "req_id": 1 00:17:33.187 } 00:17:33.187 Got JSON-RPC error response 00:17:33.187 response: 00:17:33.187 { 00:17:33.187 "code": -32602, 00:17:33.187 "message": "Invalid parameters" 00:17:33.187 } 00:17:33.187 06:54:47 -- target/tls.sh@36 -- # killprocess 520669 00:17:33.187 06:54:47 -- common/autotest_common.sh@926 -- # '[' -z 520669 ']' 00:17:33.187 06:54:47 -- common/autotest_common.sh@930 -- # kill -0 520669 00:17:33.187 06:54:47 -- common/autotest_common.sh@931 -- # uname 00:17:33.187 06:54:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.187 06:54:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 520669 00:17:33.187 06:54:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:33.187 06:54:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:33.187 06:54:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 520669' 00:17:33.187 killing process with pid 520669 00:17:33.187 06:54:47 -- common/autotest_common.sh@945 -- # kill 520669 00:17:33.187 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.187 00:17:33.187 Latency(us) 00:17:33.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.187 =================================================================================================================== 00:17:33.187 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:33.187 06:54:47 -- common/autotest_common.sh@950 -- # wait 520669 00:17:33.443 06:54:47 -- target/tls.sh@37 -- # return 1 00:17:33.444 06:54:47 -- common/autotest_common.sh@643 -- # es=1 00:17:33.444 06:54:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:33.444 06:54:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:33.444 06:54:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:33.444 06:54:47 -- target/tls.sh@167 -- # killprocess 516751 00:17:33.444 06:54:47 -- common/autotest_common.sh@926 -- # '[' -z 516751 ']' 00:17:33.444 06:54:47 -- common/autotest_common.sh@930 -- # kill -0 516751 00:17:33.444 06:54:47 -- common/autotest_common.sh@931 -- # uname 00:17:33.444 06:54:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.444 06:54:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 516751 00:17:33.444 06:54:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:33.444 06:54:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:33.444 06:54:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 516751' 00:17:33.444 killing process with pid 516751 00:17:33.444 06:54:47 -- common/autotest_common.sh@945 -- # kill 516751 00:17:33.444 06:54:47 -- common/autotest_common.sh@950 -- # wait 516751 00:17:33.701 06:54:47 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:33.701 06:54:47 -- target/tls.sh@49 -- # local key hash crc 00:17:33.701 06:54:47 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:33.701 06:54:47 -- target/tls.sh@51 -- # hash=02 00:17:33.701 06:54:47 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:17:33.701 06:54:47 -- target/tls.sh@52 -- # gzip -1 -c 00:17:33.701 06:54:47 -- target/tls.sh@52 -- # tail -c8 00:17:33.701 06:54:47 -- target/tls.sh@52 -- # head -c 4 00:17:33.701 06:54:47 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:33.701 06:54:47 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:33.701 06:54:47 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:33.701 06:54:47 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:33.701 06:54:47 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:33.701 06:54:47 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:33.701 06:54:47 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:33.701 06:54:47 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:33.701 06:54:47 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:33.701 06:54:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.701 06:54:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:33.701 06:54:47 -- common/autotest_common.sh@10 -- # set +x 00:17:33.701 06:54:47 -- nvmf/common.sh@469 -- # nvmfpid=520956 00:17:33.701 06:54:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.701 06:54:47 -- nvmf/common.sh@470 -- # waitforlisten 520956 00:17:33.701 06:54:47 -- common/autotest_common.sh@819 -- # '[' -z 520956 ']' 00:17:33.701 06:54:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.701 06:54:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.701 06:54:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.701 06:54:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.701 06:54:47 -- common/autotest_common.sh@10 -- # set +x 00:17:33.701 [2024-05-15 06:54:47.888168] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:33.701 [2024-05-15 06:54:47.888276] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.701 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.959 [2024-05-15 06:54:47.967860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.959 [2024-05-15 06:54:48.081128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.959 [2024-05-15 06:54:48.081303] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.959 [2024-05-15 06:54:48.081322] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.959 [2024-05-15 06:54:48.081336] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
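The format_interchange_psk steps above convert the 48-byte configured key into the NVMe TLS interchange form: gzip -1 is used purely to obtain a CRC32 of the key (the last 8 bytes of a gzip stream are the CRC32, little-endian, followed by ISIZE), the 4 CRC bytes are appended to the raw key, and the result is base64-encoded inside the NVMeTLSkey-1:02: wrapper, "02" being the hash selector this test uses. The same pipeline, condensed:

# Reproduce the interchange PSK exactly as the script above does.
key=00112233445566778899aabbccddeeff0011223344556677
hash=02

# gzip's trailer is CRC32 (LE) + ISIZE; keep only the 4 CRC bytes.
# Holding raw bytes in a shell variable works here because this CRC
# contains no NUL bytes, the same assumption the test script makes.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)

echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The resulting key_long.txt is chmod'd to 0600 before use, which matters later in this run.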
00:17:33.959 [2024-05-15 06:54:48.081366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.891 06:54:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.891 06:54:48 -- common/autotest_common.sh@852 -- # return 0 00:17:34.891 06:54:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.891 06:54:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.891 06:54:48 -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 06:54:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.892 06:54:48 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:34.892 06:54:48 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:34.892 06:54:48 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:34.892 [2024-05-15 06:54:49.057756] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:34.892 06:54:49 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:35.181 06:54:49 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:35.438 [2024-05-15 06:54:49.535044] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.438 [2024-05-15 06:54:49.535269] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.438 06:54:49 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:35.696 malloc0 00:17:35.696 06:54:49 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:35.954 06:54:50 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:36.211 06:54:50 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:36.211 06:54:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:36.211 06:54:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:36.211 06:54:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:36.211 06:54:50 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:36.211 06:54:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:36.211 06:54:50 -- target/tls.sh@28 -- # bdevperf_pid=521261 00:17:36.211 06:54:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.211 06:54:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.211 06:54:50 -- target/tls.sh@31 -- # waitforlisten 521261 /var/tmp/bdevperf.sock 00:17:36.211 06:54:50 -- common/autotest_common.sh@819 -- # '[' -z 521261 ']' 
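The target-side bring-up that precedes the successful run is all plain JSON-RPC; condensed from the calls above (paths assume an SPDK checkout, with $RPC standing in for scripts/rpc.py):

RPC=./scripts/rpc.py
KEY=./test/nvmf/target/key_long.txt         # interchange PSK, permissions 0600

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k           # -k: listener requires TLS
$RPC bdev_malloc_create 32 4096 -b malloc0   # 32 MiB malloc bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk "$KEY"  # register the PSK for this host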
00:17:36.211 06:54:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.212 06:54:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:36.212 06:54:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.212 06:54:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:36.212 06:54:50 -- common/autotest_common.sh@10 -- # set +x 00:17:36.212 [2024-05-15 06:54:50.351032] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:36.212 [2024-05-15 06:54:50.351121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521261 ] 00:17:36.212 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.212 [2024-05-15 06:54:50.423169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.469 [2024-05-15 06:54:50.535910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.401 06:54:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:37.401 06:54:51 -- common/autotest_common.sh@852 -- # return 0 00:17:37.401 06:54:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:37.401 [2024-05-15 06:54:51.571448] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.658 TLSTESTn1 00:17:37.658 06:54:51 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:37.658 Running I/O for 10 seconds... 
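On the initiator side the pattern is the one used throughout this log: start bdevperf idle with -z, attach the controller over bdevperf's own RPC socket, then drive I/O with bdevperf.py. Condensed from the commands above, with paths relative to the SPDK checkout:

BP_SOCK=/var/tmp/bdevperf.sock
./build/examples/bdevperf -m 0x4 -z -r $BP_SOCK -q 128 -o 4096 -w verify -t 10 &

./scripts/rpc.py -s $BP_SOCK bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk ./test/nvmf/target/key_long.txt    # handshake succeeds -> TLSTESTn1

./examples/bdev/bdevperf/bdevperf.py -t 20 -s $BP_SOCK perform_tests

The 10-second verify run whose results follow is the first attach in this log that completes the TLS handshake.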
00:17:47.624 00:17:47.625 Latency(us) 00:17:47.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.625 Verification LBA range: start 0x0 length 0x2000 00:17:47.625 TLSTESTn1 : 10.06 1067.48 4.17 0.00 0.00 119666.36 5437.06 118838.61 00:17:47.625 =================================================================================================================== 00:17:47.625 Total : 1067.48 4.17 0.00 0.00 119666.36 5437.06 118838.61 00:17:47.625 0 00:17:47.625 06:55:01 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.625 06:55:01 -- target/tls.sh@45 -- # killprocess 521261 00:17:47.625 06:55:01 -- common/autotest_common.sh@926 -- # '[' -z 521261 ']' 00:17:47.625 06:55:01 -- common/autotest_common.sh@930 -- # kill -0 521261 00:17:47.625 06:55:01 -- common/autotest_common.sh@931 -- # uname 00:17:47.625 06:55:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:47.625 06:55:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 521261 00:17:47.882 06:55:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:47.882 06:55:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:47.882 06:55:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 521261' 00:17:47.882 killing process with pid 521261 00:17:47.882 06:55:01 -- common/autotest_common.sh@945 -- # kill 521261 00:17:47.882 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.882 00:17:47.882 Latency(us) 00:17:47.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.882 =================================================================================================================== 00:17:47.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.882 06:55:01 -- common/autotest_common.sh@950 -- # wait 521261 00:17:48.145 06:55:02 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:48.145 06:55:02 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:48.145 06:55:02 -- common/autotest_common.sh@640 -- # local es=0 00:17:48.145 06:55:02 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:48.145 06:55:02 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:48.145 06:55:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:48.145 06:55:02 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:48.145 06:55:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:48.145 06:55:02 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:48.145 06:55:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.145 06:55:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.145 06:55:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.145 06:55:02 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:48.145 06:55:02 -- target/tls.sh@25 
-- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.145 06:55:02 -- target/tls.sh@28 -- # bdevperf_pid=522631 00:17:48.145 06:55:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.145 06:55:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.145 06:55:02 -- target/tls.sh@31 -- # waitforlisten 522631 /var/tmp/bdevperf.sock 00:17:48.145 06:55:02 -- common/autotest_common.sh@819 -- # '[' -z 522631 ']' 00:17:48.145 06:55:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.145 06:55:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.145 06:55:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.145 06:55:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.145 06:55:02 -- common/autotest_common.sh@10 -- # set +x 00:17:48.145 [2024-05-15 06:55:02.183865] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:48.145 [2024-05-15 06:55:02.183967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522631 ] 00:17:48.145 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.145 [2024-05-15 06:55:02.251070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.145 [2024-05-15 06:55:02.354280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.087 06:55:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.087 06:55:03 -- common/autotest_common.sh@852 -- # return 0 00:17:49.087 06:55:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:49.346 [2024-05-15 06:55:03.329411] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.346 [2024-05-15 06:55:03.329473] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:49.346 request: 00:17:49.346 { 00:17:49.346 "name": "TLSTEST", 00:17:49.346 "trtype": "tcp", 00:17:49.346 "traddr": "10.0.0.2", 00:17:49.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.346 "adrfam": "ipv4", 00:17:49.346 "trsvcid": "4420", 00:17:49.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.346 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:49.346 "method": "bdev_nvme_attach_controller", 00:17:49.346 "req_id": 1 00:17:49.346 } 00:17:49.346 Got JSON-RPC error response 00:17:49.346 response: 00:17:49.346 { 00:17:49.346 "code": -22, 00:17:49.346 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:49.346 } 00:17:49.346 06:55:03 -- target/tls.sh@36 -- # killprocess 522631 00:17:49.346 06:55:03 -- common/autotest_common.sh@926 -- # '[' -z 522631 ']' 00:17:49.346 06:55:03 -- common/autotest_common.sh@930 -- # kill -0 522631 
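The -22 "Could not retrieve PSK from file" above is not a handshake failure: tcp_load_psk rejects the key file outright because it was loosened to 0666 at tls.sh@179, and SPDK refuses PSK files that group or other can read. Keeping interchange keys owner-only is the fix:

chmod 0600 ./test/nvmf/target/key_long.txt       # owner read/write only
stat -c '%a %n' ./test/nvmf/target/key_long.txt  # expect "600 ..."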
00:17:49.346 06:55:03 -- common/autotest_common.sh@931 -- # uname 00:17:49.346 06:55:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:49.346 06:55:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 522631 00:17:49.346 06:55:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:49.346 06:55:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:49.346 06:55:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 522631' 00:17:49.346 killing process with pid 522631 00:17:49.346 06:55:03 -- common/autotest_common.sh@945 -- # kill 522631 00:17:49.346 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.346 00:17:49.346 Latency(us) 00:17:49.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.346 =================================================================================================================== 00:17:49.346 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.346 06:55:03 -- common/autotest_common.sh@950 -- # wait 522631 00:17:49.604 06:55:03 -- target/tls.sh@37 -- # return 1 00:17:49.604 06:55:03 -- common/autotest_common.sh@643 -- # es=1 00:17:49.604 06:55:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:49.604 06:55:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:49.604 06:55:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:49.604 06:55:03 -- target/tls.sh@183 -- # killprocess 520956 00:17:49.604 06:55:03 -- common/autotest_common.sh@926 -- # '[' -z 520956 ']' 00:17:49.604 06:55:03 -- common/autotest_common.sh@930 -- # kill -0 520956 00:17:49.604 06:55:03 -- common/autotest_common.sh@931 -- # uname 00:17:49.604 06:55:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:49.604 06:55:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 520956 00:17:49.604 06:55:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:49.604 06:55:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:49.604 06:55:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 520956' 00:17:49.604 killing process with pid 520956 00:17:49.604 06:55:03 -- common/autotest_common.sh@945 -- # kill 520956 00:17:49.604 06:55:03 -- common/autotest_common.sh@950 -- # wait 520956 00:17:49.862 06:55:03 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:49.862 06:55:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:49.862 06:55:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:49.862 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:17:49.862 06:55:03 -- nvmf/common.sh@469 -- # nvmfpid=522913 00:17:49.862 06:55:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:49.862 06:55:03 -- nvmf/common.sh@470 -- # waitforlisten 522913 00:17:49.862 06:55:03 -- common/autotest_common.sh@819 -- # '[' -z 522913 ']' 00:17:49.862 06:55:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.862 06:55:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.862 06:55:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:49.862 06:55:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.862 06:55:03 -- common/autotest_common.sh@10 -- # set +x 00:17:49.862 [2024-05-15 06:55:03.981907] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:49.862 [2024-05-15 06:55:03.982026] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.862 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.862 [2024-05-15 06:55:04.054172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.119 [2024-05-15 06:55:04.158478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:50.119 [2024-05-15 06:55:04.158634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.119 [2024-05-15 06:55:04.158650] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.119 [2024-05-15 06:55:04.158662] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.119 [2024-05-15 06:55:04.158689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.052 06:55:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:51.052 06:55:04 -- common/autotest_common.sh@852 -- # return 0 00:17:51.052 06:55:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:51.052 06:55:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:51.052 06:55:04 -- common/autotest_common.sh@10 -- # set +x 00:17:51.052 06:55:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.052 06:55:04 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:51.052 06:55:04 -- common/autotest_common.sh@640 -- # local es=0 00:17:51.052 06:55:04 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:51.052 06:55:04 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:51.052 06:55:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.052 06:55:04 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:51.052 06:55:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:51.052 06:55:04 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:51.052 06:55:04 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:51.052 06:55:04 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.052 [2024-05-15 06:55:05.246996] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.052 06:55:05 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.309 06:55:05 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:51.567 [2024-05-15 06:55:05.728333] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:51.567 [2024-05-15 06:55:05.728564] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.567 06:55:05 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:51.824 malloc0 00:17:51.824 06:55:05 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.082 06:55:06 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:52.340 [2024-05-15 06:55:06.446751] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:52.340 [2024-05-15 06:55:06.446790] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:52.340 [2024-05-15 06:55:06.446810] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:52.340 request: 00:17:52.340 { 00:17:52.340 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.340 "host": "nqn.2016-06.io.spdk:host1", 00:17:52.340 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:52.340 "method": "nvmf_subsystem_add_host", 00:17:52.340 "req_id": 1 00:17:52.340 } 00:17:52.340 Got JSON-RPC error response 00:17:52.340 response: 00:17:52.340 { 00:17:52.340 "code": -32603, 00:17:52.340 "message": "Internal error" 00:17:52.340 } 00:17:52.340 06:55:06 -- common/autotest_common.sh@643 -- # es=1 00:17:52.340 06:55:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:52.340 06:55:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:52.340 06:55:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:52.340 06:55:06 -- target/tls.sh@189 -- # killprocess 522913 00:17:52.340 06:55:06 -- common/autotest_common.sh@926 -- # '[' -z 522913 ']' 00:17:52.340 06:55:06 -- common/autotest_common.sh@930 -- # kill -0 522913 00:17:52.340 06:55:06 -- common/autotest_common.sh@931 -- # uname 00:17:52.340 06:55:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:52.340 06:55:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 522913 00:17:52.340 06:55:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:52.340 06:55:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:52.340 06:55:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 522913' 00:17:52.340 killing process with pid 522913 00:17:52.340 06:55:06 -- common/autotest_common.sh@945 -- # kill 522913 00:17:52.340 06:55:06 -- common/autotest_common.sh@950 -- # wait 522913 00:17:52.599 06:55:06 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:52.599 06:55:06 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:52.599 06:55:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:52.599 06:55:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:52.599 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:17:52.599 06:55:06 -- nvmf/common.sh@469 -- # nvmfpid=523228 00:17:52.599 06:55:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
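The same permission check runs on the target side: with the key still world-readable, nvmf_subsystem_add_host fails with -32603 via tcp.c:tcp_load_psk and nvmf_tcp_subsystem_add_host, so the test exercises both the initiator-side and target-side PSK loaders before restoring 0600. A condensed repro of the target-side case:

chmod 0666 ./test/nvmf/target/key_long.txt
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk ./test/nvmf/target/key_long.txt \
    && echo "unexpected success" || echo "rejected as expected"
chmod 0600 ./test/nvmf/target/key_long.txt       # restore before the next stage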
00:17:52.599 06:55:06 -- nvmf/common.sh@470 -- # waitforlisten 523228 00:17:52.599 06:55:06 -- common/autotest_common.sh@819 -- # '[' -z 523228 ']' 00:17:52.599 06:55:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.599 06:55:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.599 06:55:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.599 06:55:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.599 06:55:06 -- common/autotest_common.sh@10 -- # set +x 00:17:52.857 [2024-05-15 06:55:06.841811] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:52.857 [2024-05-15 06:55:06.841905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.857 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.857 [2024-05-15 06:55:06.924306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.857 [2024-05-15 06:55:07.041137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:52.857 [2024-05-15 06:55:07.041307] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.857 [2024-05-15 06:55:07.041326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.857 [2024-05-15 06:55:07.041341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
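As the app_setup_trace notices above point out, the running target keeps its trace buffer in shared memory, and a snapshot can be pulled while these tests run. A sketch of the two options the log itself suggests (binary location assumes a default SPDK build):

./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.out   # parse a live snapshot
cp /dev/shm/nvmf_trace.0 /tmp/                         # or keep the raw buffer for offline analysis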
00:17:52.857 [2024-05-15 06:55:07.041380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.790 06:55:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:53.790 06:55:07 -- common/autotest_common.sh@852 -- # return 0 00:17:53.790 06:55:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:53.790 06:55:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:53.790 06:55:07 -- common/autotest_common.sh@10 -- # set +x 00:17:53.790 06:55:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.790 06:55:07 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:53.790 06:55:07 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:53.790 06:55:07 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.047 [2024-05-15 06:55:08.113666] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.047 06:55:08 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:54.304 06:55:08 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.561 [2024-05-15 06:55:08.635065] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.561 [2024-05-15 06:55:08.635337] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.561 06:55:08 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.820 malloc0 00:17:54.820 06:55:08 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.082 06:55:09 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:55.383 06:55:09 -- target/tls.sh@197 -- # bdevperf_pid=523644 00:17:55.384 06:55:09 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.384 06:55:09 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.384 06:55:09 -- target/tls.sh@200 -- # waitforlisten 523644 /var/tmp/bdevperf.sock 00:17:55.384 06:55:09 -- common/autotest_common.sh@819 -- # '[' -z 523644 ']' 00:17:55.384 06:55:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.384 06:55:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:55.384 06:55:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
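bdevperf (pid 523644) is again started with -z, and the test blocks until its RPC socket answers before attaching. A wait loop in the spirit of the waitforlisten helper echoed above (illustrative, not the autotest implementation; rpc_get_methods is a cheap no-op probe):

# Poll the app's RPC socket until it responds, then proceed.
wait_for_rpc() {
    local sock=$1 retries=100
    while (( retries-- )); do
        ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1
}
wait_for_rpc /var/tmp/bdevperf.sock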
00:17:55.384 06:55:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:55.384 06:55:09 -- common/autotest_common.sh@10 -- # set +x 00:17:55.384 [2024-05-15 06:55:09.512040] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:55.384 [2024-05-15 06:55:09.512115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523644 ] 00:17:55.384 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.384 [2024-05-15 06:55:09.578784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.642 [2024-05-15 06:55:09.685274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.575 06:55:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.575 06:55:10 -- common/autotest_common.sh@852 -- # return 0 00:17:56.575 06:55:10 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:56.575 [2024-05-15 06:55:10.765267] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.832 TLSTESTn1 00:17:56.832 06:55:10 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:57.090 06:55:11 -- target/tls.sh@205 -- # tgtconf='{ 00:17:57.090 "subsystems": [ 00:17:57.090 { 00:17:57.090 "subsystem": "iobuf", 00:17:57.090 "config": [ 00:17:57.090 { 00:17:57.090 "method": "iobuf_set_options", 00:17:57.090 "params": { 00:17:57.090 "small_pool_count": 8192, 00:17:57.090 "large_pool_count": 1024, 00:17:57.090 "small_bufsize": 8192, 00:17:57.090 "large_bufsize": 135168 00:17:57.090 } 00:17:57.090 } 00:17:57.090 ] 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "subsystem": "sock", 00:17:57.090 "config": [ 00:17:57.090 { 00:17:57.090 "method": "sock_impl_set_options", 00:17:57.090 "params": { 00:17:57.090 "impl_name": "posix", 00:17:57.090 "recv_buf_size": 2097152, 00:17:57.090 "send_buf_size": 2097152, 00:17:57.090 "enable_recv_pipe": true, 00:17:57.090 "enable_quickack": false, 00:17:57.090 "enable_placement_id": 0, 00:17:57.090 "enable_zerocopy_send_server": true, 00:17:57.090 "enable_zerocopy_send_client": false, 00:17:57.090 "zerocopy_threshold": 0, 00:17:57.090 "tls_version": 0, 00:17:57.090 "enable_ktls": false 00:17:57.090 } 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "method": "sock_impl_set_options", 00:17:57.090 "params": { 00:17:57.090 "impl_name": "ssl", 00:17:57.090 "recv_buf_size": 4096, 00:17:57.090 "send_buf_size": 4096, 00:17:57.090 "enable_recv_pipe": true, 00:17:57.090 "enable_quickack": false, 00:17:57.090 "enable_placement_id": 0, 00:17:57.090 "enable_zerocopy_send_server": true, 00:17:57.090 "enable_zerocopy_send_client": false, 00:17:57.090 "zerocopy_threshold": 0, 00:17:57.090 "tls_version": 0, 00:17:57.090 "enable_ktls": false 00:17:57.090 } 00:17:57.090 } 00:17:57.090 ] 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "subsystem": "vmd", 00:17:57.090 "config": [] 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "subsystem": "accel", 00:17:57.090 "config": [ 00:17:57.090 { 00:17:57.090 "method": "accel_set_options", 00:17:57.090 "params": { 00:17:57.090 "small_cache_size": 128, 
00:17:57.090 "large_cache_size": 16, 00:17:57.090 "task_count": 2048, 00:17:57.090 "sequence_count": 2048, 00:17:57.090 "buf_count": 2048 00:17:57.090 } 00:17:57.090 } 00:17:57.090 ] 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "subsystem": "bdev", 00:17:57.090 "config": [ 00:17:57.090 { 00:17:57.090 "method": "bdev_set_options", 00:17:57.090 "params": { 00:17:57.090 "bdev_io_pool_size": 65535, 00:17:57.090 "bdev_io_cache_size": 256, 00:17:57.090 "bdev_auto_examine": true, 00:17:57.090 "iobuf_small_cache_size": 128, 00:17:57.090 "iobuf_large_cache_size": 16 00:17:57.090 } 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "method": "bdev_raid_set_options", 00:17:57.090 "params": { 00:17:57.090 "process_window_size_kb": 1024 00:17:57.090 } 00:17:57.090 }, 00:17:57.090 { 00:17:57.090 "method": "bdev_iscsi_set_options", 00:17:57.090 "params": { 00:17:57.090 "timeout_sec": 30 00:17:57.090 } 00:17:57.090 }, 00:17:57.090 { 00:17:57.091 "method": "bdev_nvme_set_options", 00:17:57.091 "params": { 00:17:57.091 "action_on_timeout": "none", 00:17:57.091 "timeout_us": 0, 00:17:57.091 "timeout_admin_us": 0, 00:17:57.091 "keep_alive_timeout_ms": 10000, 00:17:57.091 "transport_retry_count": 4, 00:17:57.091 "arbitration_burst": 0, 00:17:57.091 "low_priority_weight": 0, 00:17:57.091 "medium_priority_weight": 0, 00:17:57.091 "high_priority_weight": 0, 00:17:57.091 "nvme_adminq_poll_period_us": 10000, 00:17:57.091 "nvme_ioq_poll_period_us": 0, 00:17:57.091 "io_queue_requests": 0, 00:17:57.091 "delay_cmd_submit": true, 00:17:57.091 "bdev_retry_count": 3, 00:17:57.091 "transport_ack_timeout": 0, 00:17:57.091 "ctrlr_loss_timeout_sec": 0, 00:17:57.091 "reconnect_delay_sec": 0, 00:17:57.091 "fast_io_fail_timeout_sec": 0, 00:17:57.091 "generate_uuids": false, 00:17:57.091 "transport_tos": 0, 00:17:57.091 "io_path_stat": false, 00:17:57.091 "allow_accel_sequence": false 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "bdev_nvme_set_hotplug", 00:17:57.091 "params": { 00:17:57.091 "period_us": 100000, 00:17:57.091 "enable": false 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "bdev_malloc_create", 00:17:57.091 "params": { 00:17:57.091 "name": "malloc0", 00:17:57.091 "num_blocks": 8192, 00:17:57.091 "block_size": 4096, 00:17:57.091 "physical_block_size": 4096, 00:17:57.091 "uuid": "d1dcbb72-0017-4ea9-89e6-e486598e402c", 00:17:57.091 "optimal_io_boundary": 0 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "bdev_wait_for_examine" 00:17:57.091 } 00:17:57.091 ] 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "subsystem": "nbd", 00:17:57.091 "config": [] 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "subsystem": "scheduler", 00:17:57.091 "config": [ 00:17:57.091 { 00:17:57.091 "method": "framework_set_scheduler", 00:17:57.091 "params": { 00:17:57.091 "name": "static" 00:17:57.091 } 00:17:57.091 } 00:17:57.091 ] 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "subsystem": "nvmf", 00:17:57.091 "config": [ 00:17:57.091 { 00:17:57.091 "method": "nvmf_set_config", 00:17:57.091 "params": { 00:17:57.091 "discovery_filter": "match_any", 00:17:57.091 "admin_cmd_passthru": { 00:17:57.091 "identify_ctrlr": false 00:17:57.091 } 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "nvmf_set_max_subsystems", 00:17:57.091 "params": { 00:17:57.091 "max_subsystems": 1024 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "nvmf_set_crdt", 00:17:57.091 "params": { 00:17:57.091 "crdt1": 0, 00:17:57.091 "crdt2": 0, 00:17:57.091 "crdt3": 0 00:17:57.091 } 
00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "nvmf_create_transport", 00:17:57.091 "params": { 00:17:57.091 "trtype": "TCP", 00:17:57.091 "max_queue_depth": 128, 00:17:57.091 "max_io_qpairs_per_ctrlr": 127, 00:17:57.091 "in_capsule_data_size": 4096, 00:17:57.091 "max_io_size": 131072, 00:17:57.091 "io_unit_size": 131072, 00:17:57.091 "max_aq_depth": 128, 00:17:57.091 "num_shared_buffers": 511, 00:17:57.091 "buf_cache_size": 4294967295, 00:17:57.091 "dif_insert_or_strip": false, 00:17:57.091 "zcopy": false, 00:17:57.091 "c2h_success": false, 00:17:57.091 "sock_priority": 0, 00:17:57.091 "abort_timeout_sec": 1 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "nvmf_create_subsystem", 00:17:57.091 "params": { 00:17:57.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.091 "allow_any_host": false, 00:17:57.091 "serial_number": "SPDK00000000000001", 00:17:57.091 "model_number": "SPDK bdev Controller", 00:17:57.091 "max_namespaces": 10, 00:17:57.091 "min_cntlid": 1, 00:17:57.091 "max_cntlid": 65519, 00:17:57.091 "ana_reporting": false 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "nvmf_subsystem_add_host", 00:17:57.091 "params": { 00:17:57.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.091 "host": "nqn.2016-06.io.spdk:host1", 00:17:57.091 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "nvmf_subsystem_add_ns", 00:17:57.091 "params": { 00:17:57.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.091 "namespace": { 00:17:57.091 "nsid": 1, 00:17:57.091 "bdev_name": "malloc0", 00:17:57.091 "nguid": "D1DCBB7200174EA989E6E486598E402C", 00:17:57.091 "uuid": "d1dcbb72-0017-4ea9-89e6-e486598e402c" 00:17:57.091 } 00:17:57.091 } 00:17:57.091 }, 00:17:57.091 { 00:17:57.091 "method": "nvmf_subsystem_add_listener", 00:17:57.091 "params": { 00:17:57.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.091 "listen_address": { 00:17:57.091 "trtype": "TCP", 00:17:57.091 "adrfam": "IPv4", 00:17:57.091 "traddr": "10.0.0.2", 00:17:57.091 "trsvcid": "4420" 00:17:57.091 }, 00:17:57.091 "secure_channel": true 00:17:57.091 } 00:17:57.091 } 00:17:57.091 ] 00:17:57.091 } 00:17:57.091 ] 00:17:57.091 }' 00:17:57.091 06:55:11 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:57.348 06:55:11 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:57.348 "subsystems": [ 00:17:57.348 { 00:17:57.348 "subsystem": "iobuf", 00:17:57.348 "config": [ 00:17:57.348 { 00:17:57.348 "method": "iobuf_set_options", 00:17:57.348 "params": { 00:17:57.348 "small_pool_count": 8192, 00:17:57.348 "large_pool_count": 1024, 00:17:57.348 "small_bufsize": 8192, 00:17:57.348 "large_bufsize": 135168 00:17:57.348 } 00:17:57.348 } 00:17:57.348 ] 00:17:57.348 }, 00:17:57.348 { 00:17:57.348 "subsystem": "sock", 00:17:57.348 "config": [ 00:17:57.348 { 00:17:57.348 "method": "sock_impl_set_options", 00:17:57.348 "params": { 00:17:57.348 "impl_name": "posix", 00:17:57.348 "recv_buf_size": 2097152, 00:17:57.348 "send_buf_size": 2097152, 00:17:57.348 "enable_recv_pipe": true, 00:17:57.348 "enable_quickack": false, 00:17:57.348 "enable_placement_id": 0, 00:17:57.348 "enable_zerocopy_send_server": true, 00:17:57.348 "enable_zerocopy_send_client": false, 00:17:57.348 "zerocopy_threshold": 0, 00:17:57.348 "tls_version": 0, 00:17:57.348 "enable_ktls": false 00:17:57.348 } 00:17:57.348 }, 00:17:57.348 { 00:17:57.349 "method": 
"sock_impl_set_options", 00:17:57.349 "params": { 00:17:57.349 "impl_name": "ssl", 00:17:57.349 "recv_buf_size": 4096, 00:17:57.349 "send_buf_size": 4096, 00:17:57.349 "enable_recv_pipe": true, 00:17:57.349 "enable_quickack": false, 00:17:57.349 "enable_placement_id": 0, 00:17:57.349 "enable_zerocopy_send_server": true, 00:17:57.349 "enable_zerocopy_send_client": false, 00:17:57.349 "zerocopy_threshold": 0, 00:17:57.349 "tls_version": 0, 00:17:57.349 "enable_ktls": false 00:17:57.349 } 00:17:57.349 } 00:17:57.349 ] 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "subsystem": "vmd", 00:17:57.349 "config": [] 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "subsystem": "accel", 00:17:57.349 "config": [ 00:17:57.349 { 00:17:57.349 "method": "accel_set_options", 00:17:57.349 "params": { 00:17:57.349 "small_cache_size": 128, 00:17:57.349 "large_cache_size": 16, 00:17:57.349 "task_count": 2048, 00:17:57.349 "sequence_count": 2048, 00:17:57.349 "buf_count": 2048 00:17:57.349 } 00:17:57.349 } 00:17:57.349 ] 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "subsystem": "bdev", 00:17:57.349 "config": [ 00:17:57.349 { 00:17:57.349 "method": "bdev_set_options", 00:17:57.349 "params": { 00:17:57.349 "bdev_io_pool_size": 65535, 00:17:57.349 "bdev_io_cache_size": 256, 00:17:57.349 "bdev_auto_examine": true, 00:17:57.349 "iobuf_small_cache_size": 128, 00:17:57.349 "iobuf_large_cache_size": 16 00:17:57.349 } 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "method": "bdev_raid_set_options", 00:17:57.349 "params": { 00:17:57.349 "process_window_size_kb": 1024 00:17:57.349 } 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "method": "bdev_iscsi_set_options", 00:17:57.349 "params": { 00:17:57.349 "timeout_sec": 30 00:17:57.349 } 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "method": "bdev_nvme_set_options", 00:17:57.349 "params": { 00:17:57.349 "action_on_timeout": "none", 00:17:57.349 "timeout_us": 0, 00:17:57.349 "timeout_admin_us": 0, 00:17:57.349 "keep_alive_timeout_ms": 10000, 00:17:57.349 "transport_retry_count": 4, 00:17:57.349 "arbitration_burst": 0, 00:17:57.349 "low_priority_weight": 0, 00:17:57.349 "medium_priority_weight": 0, 00:17:57.349 "high_priority_weight": 0, 00:17:57.349 "nvme_adminq_poll_period_us": 10000, 00:17:57.349 "nvme_ioq_poll_period_us": 0, 00:17:57.349 "io_queue_requests": 512, 00:17:57.349 "delay_cmd_submit": true, 00:17:57.349 "bdev_retry_count": 3, 00:17:57.349 "transport_ack_timeout": 0, 00:17:57.349 "ctrlr_loss_timeout_sec": 0, 00:17:57.349 "reconnect_delay_sec": 0, 00:17:57.349 "fast_io_fail_timeout_sec": 0, 00:17:57.349 "generate_uuids": false, 00:17:57.349 "transport_tos": 0, 00:17:57.349 "io_path_stat": false, 00:17:57.349 "allow_accel_sequence": false 00:17:57.349 } 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "method": "bdev_nvme_attach_controller", 00:17:57.349 "params": { 00:17:57.349 "name": "TLSTEST", 00:17:57.349 "trtype": "TCP", 00:17:57.349 "adrfam": "IPv4", 00:17:57.349 "traddr": "10.0.0.2", 00:17:57.349 "trsvcid": "4420", 00:17:57.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.349 "prchk_reftag": false, 00:17:57.349 "prchk_guard": false, 00:17:57.349 "ctrlr_loss_timeout_sec": 0, 00:17:57.349 "reconnect_delay_sec": 0, 00:17:57.349 "fast_io_fail_timeout_sec": 0, 00:17:57.349 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:57.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.349 "hdgst": false, 00:17:57.349 "ddgst": false 00:17:57.349 } 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "method": "bdev_nvme_set_hotplug", 00:17:57.349 
"params": { 00:17:57.349 "period_us": 100000, 00:17:57.349 "enable": false 00:17:57.349 } 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "method": "bdev_wait_for_examine" 00:17:57.349 } 00:17:57.349 ] 00:17:57.349 }, 00:17:57.349 { 00:17:57.349 "subsystem": "nbd", 00:17:57.349 "config": [] 00:17:57.349 } 00:17:57.349 ] 00:17:57.349 }' 00:17:57.349 06:55:11 -- target/tls.sh@208 -- # killprocess 523644 00:17:57.349 06:55:11 -- common/autotest_common.sh@926 -- # '[' -z 523644 ']' 00:17:57.349 06:55:11 -- common/autotest_common.sh@930 -- # kill -0 523644 00:17:57.349 06:55:11 -- common/autotest_common.sh@931 -- # uname 00:17:57.349 06:55:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:57.349 06:55:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 523644 00:17:57.349 06:55:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:57.349 06:55:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:57.349 06:55:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 523644' 00:17:57.349 killing process with pid 523644 00:17:57.349 06:55:11 -- common/autotest_common.sh@945 -- # kill 523644 00:17:57.349 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.349 00:17:57.349 Latency(us) 00:17:57.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.349 =================================================================================================================== 00:17:57.349 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.349 06:55:11 -- common/autotest_common.sh@950 -- # wait 523644 00:17:57.606 06:55:11 -- target/tls.sh@209 -- # killprocess 523228 00:17:57.606 06:55:11 -- common/autotest_common.sh@926 -- # '[' -z 523228 ']' 00:17:57.606 06:55:11 -- common/autotest_common.sh@930 -- # kill -0 523228 00:17:57.606 06:55:11 -- common/autotest_common.sh@931 -- # uname 00:17:57.606 06:55:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:57.606 06:55:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 523228 00:17:57.606 06:55:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:57.606 06:55:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:57.606 06:55:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 523228' 00:17:57.606 killing process with pid 523228 00:17:57.606 06:55:11 -- common/autotest_common.sh@945 -- # kill 523228 00:17:57.606 06:55:11 -- common/autotest_common.sh@950 -- # wait 523228 00:17:58.172 06:55:12 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:58.172 06:55:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:58.172 06:55:12 -- target/tls.sh@212 -- # echo '{ 00:17:58.172 "subsystems": [ 00:17:58.172 { 00:17:58.172 "subsystem": "iobuf", 00:17:58.172 "config": [ 00:17:58.172 { 00:17:58.172 "method": "iobuf_set_options", 00:17:58.172 "params": { 00:17:58.172 "small_pool_count": 8192, 00:17:58.172 "large_pool_count": 1024, 00:17:58.172 "small_bufsize": 8192, 00:17:58.172 "large_bufsize": 135168 00:17:58.172 } 00:17:58.172 } 00:17:58.172 ] 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "subsystem": "sock", 00:17:58.172 "config": [ 00:17:58.172 { 00:17:58.172 "method": "sock_impl_set_options", 00:17:58.172 "params": { 00:17:58.172 "impl_name": "posix", 00:17:58.172 "recv_buf_size": 2097152, 00:17:58.172 "send_buf_size": 2097152, 00:17:58.172 "enable_recv_pipe": true, 00:17:58.172 "enable_quickack": false, 00:17:58.172 
"enable_placement_id": 0, 00:17:58.172 "enable_zerocopy_send_server": true, 00:17:58.172 "enable_zerocopy_send_client": false, 00:17:58.172 "zerocopy_threshold": 0, 00:17:58.172 "tls_version": 0, 00:17:58.172 "enable_ktls": false 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "sock_impl_set_options", 00:17:58.172 "params": { 00:17:58.172 "impl_name": "ssl", 00:17:58.172 "recv_buf_size": 4096, 00:17:58.172 "send_buf_size": 4096, 00:17:58.172 "enable_recv_pipe": true, 00:17:58.172 "enable_quickack": false, 00:17:58.172 "enable_placement_id": 0, 00:17:58.172 "enable_zerocopy_send_server": true, 00:17:58.172 "enable_zerocopy_send_client": false, 00:17:58.172 "zerocopy_threshold": 0, 00:17:58.172 "tls_version": 0, 00:17:58.172 "enable_ktls": false 00:17:58.172 } 00:17:58.172 } 00:17:58.172 ] 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "subsystem": "vmd", 00:17:58.172 "config": [] 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "subsystem": "accel", 00:17:58.172 "config": [ 00:17:58.172 { 00:17:58.172 "method": "accel_set_options", 00:17:58.172 "params": { 00:17:58.172 "small_cache_size": 128, 00:17:58.172 "large_cache_size": 16, 00:17:58.172 "task_count": 2048, 00:17:58.172 "sequence_count": 2048, 00:17:58.172 "buf_count": 2048 00:17:58.172 } 00:17:58.172 } 00:17:58.172 ] 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "subsystem": "bdev", 00:17:58.172 "config": [ 00:17:58.172 { 00:17:58.172 "method": "bdev_set_options", 00:17:58.172 "params": { 00:17:58.172 "bdev_io_pool_size": 65535, 00:17:58.172 "bdev_io_cache_size": 256, 00:17:58.172 "bdev_auto_examine": true, 00:17:58.172 "iobuf_small_cache_size": 128, 00:17:58.172 "iobuf_large_cache_size": 16 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "bdev_raid_set_options", 00:17:58.172 "params": { 00:17:58.172 "process_window_size_kb": 1024 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "bdev_iscsi_set_options", 00:17:58.172 "params": { 00:17:58.172 "timeout_sec": 30 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "bdev_nvme_set_options", 00:17:58.172 "params": { 00:17:58.172 "action_on_timeout": "none", 00:17:58.172 "timeout_us": 0, 00:17:58.172 "timeout_admin_us": 0, 00:17:58.172 "keep_alive_timeout_ms": 10000, 00:17:58.172 "transport_retry_count": 4, 00:17:58.172 "arbitration_burst": 0, 00:17:58.172 "low_priority_weight": 0, 00:17:58.172 "medium_priority_weight": 0, 00:17:58.172 "high_priority_weight": 0, 00:17:58.172 "nvme_adminq_poll_period_us": 10000, 00:17:58.172 "nvme_ioq_poll_period_us": 0, 00:17:58.172 "io_queue_requests": 0, 00:17:58.172 "delay_cmd_submit": true, 00:17:58.172 "bdev_retry_count": 3, 00:17:58.172 "transport_ack_timeout": 0, 00:17:58.172 "ctrlr_loss_timeout_sec": 0, 00:17:58.172 "reconnect_delay_sec": 0, 00:17:58.172 "fast_io_fail_timeout_sec": 0, 00:17:58.172 "generate_uuids": false, 00:17:58.172 "transport_tos": 0, 00:17:58.172 "io_path_stat": false, 00:17:58.172 "allow_accel_sequence": false 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "bdev_nvme_set_hotplug", 00:17:58.172 "params": { 00:17:58.172 "period_us": 100000, 00:17:58.172 "enable": false 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "bdev_malloc_create", 00:17:58.172 "params": { 00:17:58.172 "name": "malloc0", 00:17:58.172 "num_blocks": 8192, 00:17:58.172 "block_size": 4096, 00:17:58.172 "physical_block_size": 4096, 00:17:58.172 "uuid": "d1dcbb72-0017-4ea9-89e6-e486598e402c", 00:17:58.172 "optimal_io_boundary": 0 00:17:58.172 } 
00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "bdev_wait_for_examine" 00:17:58.172 } 00:17:58.172 ] 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "subsystem": "nbd", 00:17:58.172 "config": [] 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "subsystem": "scheduler", 00:17:58.172 "config": [ 00:17:58.172 { 00:17:58.172 "method": "framework_set_scheduler", 00:17:58.172 "params": { 00:17:58.172 "name": "static" 00:17:58.172 } 00:17:58.172 } 00:17:58.172 ] 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "subsystem": "nvmf", 00:17:58.172 "config": [ 00:17:58.172 { 00:17:58.172 "method": "nvmf_set_config", 00:17:58.172 "params": { 00:17:58.172 "discovery_filter": "match_any", 00:17:58.172 "admin_cmd_passthru": { 00:17:58.172 "identify_ctrlr": false 00:17:58.172 } 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "nvmf_set_max_subsystems", 00:17:58.172 "params": { 00:17:58.172 "max_subsystems": 1024 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "nvmf_set_crdt", 00:17:58.172 "params": { 00:17:58.172 "crdt1": 0, 00:17:58.172 "crdt2": 0, 00:17:58.172 "crdt3": 0 00:17:58.172 } 00:17:58.172 }, 00:17:58.172 { 00:17:58.172 "method": "nvmf_create_transport", 00:17:58.172 "params": { 00:17:58.172 "trtype": "TCP", 00:17:58.172 "max_queue_depth": 128, 00:17:58.172 "max_io_qpairs_per_ctrlr": 127, 00:17:58.172 "in_capsule_data_size": 4096, 00:17:58.172 "max_io_size": 131072, 00:17:58.172 "io_unit_size": 131072, 00:17:58.172 "max_aq_depth": 128, 00:17:58.172 "num_shared_buffers": 511, 00:17:58.172 "buf_cache_size": 4294967295, 00:17:58.172 "dif_insert_or_strip": false, 00:17:58.172 "zcopy": false, 00:17:58.172 "c2h_success": false, 00:17:58.173 "sock_priority": 0, 00:17:58.173 "abort_timeout_sec": 1 00:17:58.173 } 00:17:58.173 }, 00:17:58.173 { 00:17:58.173 "method": "nvmf_create_subsystem", 00:17:58.173 "params": { 00:17:58.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.173 "allow_any_host": false, 00:17:58.173 "serial_number": "SPDK00000000000001", 00:17:58.173 "model_number": "SPDK bdev Controller", 00:17:58.173 "max_namespaces": 10, 00:17:58.173 "min_cntlid": 1, 00:17:58.173 "max_cntlid": 65519, 00:17:58.173 "ana_reporting": false 00:17:58.173 } 00:17:58.173 }, 00:17:58.173 { 00:17:58.173 "method": "nvmf_subsystem_add_host", 00:17:58.173 "params": { 00:17:58.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.173 "host": "nqn.2016-06.io.spdk:host1", 00:17:58.173 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:58.173 } 00:17:58.173 }, 00:17:58.173 { 00:17:58.173 "method": "nvmf_subsystem_add_ns", 00:17:58.173 "params": { 00:17:58.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.173 "namespace": { 00:17:58.173 "nsid": 1, 00:17:58.173 "bdev_name": "malloc0", 00:17:58.173 "nguid": "D1DCBB7200174EA989E6E486598E402C", 00:17:58.173 "uuid": "d1dcbb72-0017-4ea9-89e6-e486598e402c" 00:17:58.173 } 00:17:58.173 } 00:17:58.173 }, 00:17:58.173 { 00:17:58.173 "method": "nvmf_subsystem_add_listener", 00:17:58.173 "params": { 00:17:58.173 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.173 "listen_address": { 00:17:58.173 "trtype": "TCP", 00:17:58.173 "adrfam": "IPv4", 00:17:58.173 "traddr": "10.0.0.2", 00:17:58.173 "trsvcid": "4420" 00:17:58.173 }, 00:17:58.173 "secure_channel": true 00:17:58.173 } 00:17:58.173 } 00:17:58.173 ] 00:17:58.173 } 00:17:58.173 ] 00:17:58.173 }' 00:17:58.173 06:55:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:58.173 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:17:58.173 06:55:12 -- 
nvmf/common.sh@469 -- # nvmfpid=523945 00:17:58.173 06:55:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:58.173 06:55:12 -- nvmf/common.sh@470 -- # waitforlisten 523945 00:17:58.173 06:55:12 -- common/autotest_common.sh@819 -- # '[' -z 523945 ']' 00:17:58.173 06:55:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.173 06:55:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:58.173 06:55:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.173 06:55:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:58.173 06:55:12 -- common/autotest_common.sh@10 -- # set +x 00:17:58.173 [2024-05-15 06:55:12.149337] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:58.173 [2024-05-15 06:55:12.149416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.173 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.173 [2024-05-15 06:55:12.227795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.173 [2024-05-15 06:55:12.346292] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:58.173 [2024-05-15 06:55:12.346462] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.173 [2024-05-15 06:55:12.346480] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.173 [2024-05-15 06:55:12.346508] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
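The nvmf_tgt invocation above takes its whole JSON configuration from a file descriptor (-c /dev/fd/62) rather than a file on disk, which is how the harness feeds the inline JSON it just echoed. A minimal standalone sketch of the same pattern using bash process substitution (the config file name is illustrative; the JSON body is the one printed above):

    # sketch: start the target in the test namespace with an fd-backed config
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
        -c <(cat nvmf_tls_config.json)

Passing the config on a pipe avoids leaving a temporary file in the workspace and lets the script compose the JSON inline.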
00:17:58.173 [2024-05-15 06:55:12.346536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.431 [2024-05-15 06:55:12.574915] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.431 [2024-05-15 06:55:12.606937] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:58.431 [2024-05-15 06:55:12.607189] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.996 06:55:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.996 06:55:13 -- common/autotest_common.sh@852 -- # return 0 00:17:58.996 06:55:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.996 06:55:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:58.996 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:17:58.996 06:55:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.996 06:55:13 -- target/tls.sh@216 -- # bdevperf_pid=524103 00:17:58.996 06:55:13 -- target/tls.sh@217 -- # waitforlisten 524103 /var/tmp/bdevperf.sock 00:17:58.996 06:55:13 -- common/autotest_common.sh@819 -- # '[' -z 524103 ']' 00:17:58.996 06:55:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.996 06:55:13 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:58.996 06:55:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:58.996 06:55:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.996 06:55:13 -- target/tls.sh@213 -- # echo '{ 00:17:58.996 "subsystems": [ 00:17:58.996 { 00:17:58.996 "subsystem": "iobuf", 00:17:58.996 "config": [ 00:17:58.996 { 00:17:58.996 "method": "iobuf_set_options", 00:17:58.996 "params": { 00:17:58.996 "small_pool_count": 8192, 00:17:58.996 "large_pool_count": 1024, 00:17:58.996 "small_bufsize": 8192, 00:17:58.996 "large_bufsize": 135168 00:17:58.996 } 00:17:58.996 } 00:17:58.996 ] 00:17:58.996 }, 00:17:58.996 { 00:17:58.996 "subsystem": "sock", 00:17:58.996 "config": [ 00:17:58.996 { 00:17:58.996 "method": "sock_impl_set_options", 00:17:58.996 "params": { 00:17:58.996 "impl_name": "posix", 00:17:58.996 "recv_buf_size": 2097152, 00:17:58.996 "send_buf_size": 2097152, 00:17:58.996 "enable_recv_pipe": true, 00:17:58.996 "enable_quickack": false, 00:17:58.996 "enable_placement_id": 0, 00:17:58.996 "enable_zerocopy_send_server": true, 00:17:58.996 "enable_zerocopy_send_client": false, 00:17:58.996 "zerocopy_threshold": 0, 00:17:58.996 "tls_version": 0, 00:17:58.997 "enable_ktls": false 00:17:58.997 } 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "method": "sock_impl_set_options", 00:17:58.997 "params": { 00:17:58.997 "impl_name": "ssl", 00:17:58.997 "recv_buf_size": 4096, 00:17:58.997 "send_buf_size": 4096, 00:17:58.997 "enable_recv_pipe": true, 00:17:58.997 "enable_quickack": false, 00:17:58.997 "enable_placement_id": 0, 00:17:58.997 "enable_zerocopy_send_server": true, 00:17:58.997 "enable_zerocopy_send_client": false, 00:17:58.997 "zerocopy_threshold": 0, 00:17:58.997 "tls_version": 0, 00:17:58.997 "enable_ktls": false 00:17:58.997 } 00:17:58.997 } 00:17:58.997 ] 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "subsystem": "vmd", 00:17:58.997 "config": [] 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "subsystem": "accel", 00:17:58.997 
"config": [ 00:17:58.997 { 00:17:58.997 "method": "accel_set_options", 00:17:58.997 "params": { 00:17:58.997 "small_cache_size": 128, 00:17:58.997 "large_cache_size": 16, 00:17:58.997 "task_count": 2048, 00:17:58.997 "sequence_count": 2048, 00:17:58.997 "buf_count": 2048 00:17:58.997 } 00:17:58.997 } 00:17:58.997 ] 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "subsystem": "bdev", 00:17:58.997 "config": [ 00:17:58.997 { 00:17:58.997 "method": "bdev_set_options", 00:17:58.997 "params": { 00:17:58.997 "bdev_io_pool_size": 65535, 00:17:58.997 "bdev_io_cache_size": 256, 00:17:58.997 "bdev_auto_examine": true, 00:17:58.997 "iobuf_small_cache_size": 128, 00:17:58.997 "iobuf_large_cache_size": 16 00:17:58.997 } 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "method": "bdev_raid_set_options", 00:17:58.997 "params": { 00:17:58.997 "process_window_size_kb": 1024 00:17:58.997 } 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "method": "bdev_iscsi_set_options", 00:17:58.997 "params": { 00:17:58.997 "timeout_sec": 30 00:17:58.997 } 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "method": "bdev_nvme_set_options", 00:17:58.997 "params": { 00:17:58.997 "action_on_timeout": "none", 00:17:58.997 "timeout_us": 0, 00:17:58.997 "timeout_admin_us": 0, 00:17:58.997 "keep_alive_timeout_ms": 10000, 00:17:58.997 "transport_retry_count": 4, 00:17:58.997 "arbitration_burst": 0, 00:17:58.997 "low_priority_weight": 0, 00:17:58.997 "medium_priority_weight": 0, 00:17:58.997 "high_priority_weight": 0, 00:17:58.997 "nvme_adminq_poll_period_us": 10000, 00:17:58.997 "nvme_ioq_poll_period_us": 0, 00:17:58.997 "io_queue_requests": 512, 00:17:58.997 "delay_cmd_submit": true, 00:17:58.997 "bdev_retry_count": 3, 00:17:58.997 "transport_ack_timeout": 0, 00:17:58.997 "ctrlr_loss_timeout_sec": 0, 00:17:58.997 "reconnect_delay_sec": 0, 00:17:58.997 "fast_io_fail_timeout_sec": 0, 00:17:58.997 "generate_uuids": false, 00:17:58.997 "transport_tos": 0, 00:17:58.997 "io_path_stat": false, 00:17:58.997 "allow_accel_sequence": false 00:17:58.997 } 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "method": "bdev_nvme_attach_controller", 00:17:58.997 "params": { 00:17:58.997 "name": "TLSTEST", 00:17:58.997 "trtype": "TCP", 00:17:58.997 "adrfam": "IPv4", 00:17:58.997 "traddr": "10.0.0.2", 00:17:58.997 "trsvcid": "4420", 00:17:58.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.997 "prchk_reftag": false, 00:17:58.997 "prchk_guard": false, 00:17:58.997 "ctrlr_loss_timeout_sec": 0, 00:17:58.997 "reconnect_delay_sec": 0, 00:17:58.997 "fast_io_fail_timeout_sec": 0, 00:17:58.997 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:58.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.997 "hdgst":Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:58.997 false, 00:17:58.997 "ddgst": false 00:17:58.997 } 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "method": "bdev_nvme_set_hotplug", 00:17:58.997 "params": { 00:17:58.997 "period_us": 100000, 00:17:58.997 "enable": false 00:17:58.997 } 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "method": "bdev_wait_for_examine" 00:17:58.997 } 00:17:58.997 ] 00:17:58.997 }, 00:17:58.997 { 00:17:58.997 "subsystem": "nbd", 00:17:58.997 "config": [] 00:17:58.997 } 00:17:58.997 ] 00:17:58.997 }' 00:17:58.997 06:55:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:58.997 06:55:13 -- common/autotest_common.sh@10 -- # set +x 00:17:59.255 [2024-05-15 06:55:13.231420] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:17:59.255 [2024-05-15 06:55:13.231514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524103 ] 00:17:59.255 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.255 [2024-05-15 06:55:13.300481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.255 [2024-05-15 06:55:13.407189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:59.512 [2024-05-15 06:55:13.558786] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:00.077 06:55:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:00.077 06:55:14 -- common/autotest_common.sh@852 -- # return 0 00:18:00.078 06:55:14 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:00.335 Running I/O for 10 seconds... 
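bdevperf was launched with -z, so it starts idle and waits until a controller is attached and a run is triggered over its RPC socket; the perform_tests call above is that trigger. A condensed sketch of the two-step pattern (arguments as in the trace; bdevperf.json stands in for the JSON fed through /dev/fd/63):

    # start bdevperf idle, config on a pipe, RPC socket at a known path
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(cat bdevperf.json) &
    # once the socket is up, kick off the configured workload
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

Here -t 10 on bdevperf sets the 10-second I/O run, while -t 20 on the helper is the RPC wait timeout.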
00:18:10.297 00:18:10.297 Latency(us) 00:18:10.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.297 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:10.297 Verification LBA range: start 0x0 length 0x2000 00:18:10.297 TLSTESTn1 : 10.04 1368.78 5.35 0.00 0.00 93366.25 11699.39 89323.14 00:18:10.297 =================================================================================================================== 00:18:10.297 Total : 1368.78 5.35 0.00 0.00 93366.25 11699.39 89323.14 00:18:10.297 0 00:18:10.297 06:55:24 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.297 06:55:24 -- target/tls.sh@223 -- # killprocess 524103 00:18:10.297 06:55:24 -- common/autotest_common.sh@926 -- # '[' -z 524103 ']' 00:18:10.297 06:55:24 -- common/autotest_common.sh@930 -- # kill -0 524103 00:18:10.297 06:55:24 -- common/autotest_common.sh@931 -- # uname 00:18:10.297 06:55:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:10.297 06:55:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 524103 00:18:10.297 06:55:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:10.297 06:55:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:10.297 06:55:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 524103' 00:18:10.297 killing process with pid 524103 00:18:10.297 06:55:24 -- common/autotest_common.sh@945 -- # kill 524103 00:18:10.297 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.297 00:18:10.297 Latency(us) 00:18:10.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.297 =================================================================================================================== 00:18:10.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.297 06:55:24 -- common/autotest_common.sh@950 -- # wait 524103 00:18:10.555 06:55:24 -- target/tls.sh@224 -- # killprocess 523945 00:18:10.555 06:55:24 -- common/autotest_common.sh@926 -- # '[' -z 523945 ']' 00:18:10.555 06:55:24 -- common/autotest_common.sh@930 -- # kill -0 523945 00:18:10.555 06:55:24 -- common/autotest_common.sh@931 -- # uname 00:18:10.555 06:55:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:10.555 06:55:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 523945 00:18:10.555 06:55:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:10.555 06:55:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:10.555 06:55:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 523945' 00:18:10.555 killing process with pid 523945 00:18:10.555 06:55:24 -- common/autotest_common.sh@945 -- # kill 523945 00:18:10.555 06:55:24 -- common/autotest_common.sh@950 -- # wait 523945 00:18:10.813 06:55:24 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:10.813 06:55:24 -- target/tls.sh@227 -- # cleanup 00:18:10.813 06:55:24 -- target/tls.sh@15 -- # process_shm --id 0 00:18:10.813 06:55:24 -- common/autotest_common.sh@796 -- # type=--id 00:18:10.813 06:55:24 -- common/autotest_common.sh@797 -- # id=0 00:18:10.813 06:55:24 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:10.813 06:55:24 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:10.813 06:55:25 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:10.813 06:55:25 -- common/autotest_common.sh@804 -- # [[ -z 
nvmf_trace.0 ]] 00:18:10.813 06:55:25 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:10.813 06:55:25 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:10.813 nvmf_trace.0 00:18:10.813 06:55:25 -- common/autotest_common.sh@811 -- # return 0 00:18:10.813 06:55:25 -- target/tls.sh@16 -- # killprocess 524103 00:18:10.813 06:55:25 -- common/autotest_common.sh@926 -- # '[' -z 524103 ']' 00:18:10.813 06:55:25 -- common/autotest_common.sh@930 -- # kill -0 524103 00:18:10.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (524103) - No such process 00:18:10.813 06:55:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 524103 is not found' 00:18:10.813 Process with pid 524103 is not found 00:18:10.813 06:55:25 -- target/tls.sh@17 -- # nvmftestfini 00:18:10.813 06:55:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:10.813 06:55:25 -- nvmf/common.sh@116 -- # sync 00:18:10.813 06:55:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:10.813 06:55:25 -- nvmf/common.sh@119 -- # set +e 00:18:10.813 06:55:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:10.813 06:55:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:11.073 rmmod nvme_tcp 00:18:11.073 rmmod nvme_fabrics 00:18:11.073 rmmod nvme_keyring 00:18:11.073 06:55:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:11.073 06:55:25 -- nvmf/common.sh@123 -- # set -e 00:18:11.073 06:55:25 -- nvmf/common.sh@124 -- # return 0 00:18:11.073 06:55:25 -- nvmf/common.sh@477 -- # '[' -n 523945 ']' 00:18:11.073 06:55:25 -- nvmf/common.sh@478 -- # killprocess 523945 00:18:11.073 06:55:25 -- common/autotest_common.sh@926 -- # '[' -z 523945 ']' 00:18:11.073 06:55:25 -- common/autotest_common.sh@930 -- # kill -0 523945 00:18:11.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (523945) - No such process 00:18:11.073 06:55:25 -- common/autotest_common.sh@953 -- # echo 'Process with pid 523945 is not found' 00:18:11.073 Process with pid 523945 is not found 00:18:11.073 06:55:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:11.073 06:55:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:11.073 06:55:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:11.073 06:55:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.073 06:55:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:11.073 06:55:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.073 06:55:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.073 06:55:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.977 06:55:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:12.977 06:55:27 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:12.977 00:18:12.977 real 1m15.979s 00:18:12.977 user 1m59.702s 00:18:12.977 sys 0m25.280s 00:18:12.977 06:55:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.977 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:18:12.977 ************************************ 00:18:12.977 END TEST nvmf_tls 00:18:12.977 ************************************ 00:18:12.977 06:55:27 -- 
nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:12.977 06:55:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:12.977 06:55:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:12.977 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:18:12.977 ************************************ 00:18:12.977 START TEST nvmf_fips 00:18:12.977 ************************************ 00:18:12.977 06:55:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:12.977 * Looking for test storage... 00:18:13.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:13.236 06:55:27 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:13.236 06:55:27 -- nvmf/common.sh@7 -- # uname -s 00:18:13.236 06:55:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.236 06:55:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.236 06:55:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.236 06:55:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.236 06:55:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.236 06:55:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.236 06:55:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.236 06:55:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.236 06:55:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.236 06:55:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.236 06:55:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.236 06:55:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:13.236 06:55:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.236 06:55:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.236 06:55:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:13.236 06:55:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:13.236 06:55:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.236 06:55:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.236 06:55:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.236 06:55:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.236 06:55:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.236 06:55:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.236 06:55:27 -- paths/export.sh@5 -- # export PATH 00:18:13.236 06:55:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.236 06:55:27 -- nvmf/common.sh@46 -- # : 0 00:18:13.236 06:55:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:13.236 06:55:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:13.236 06:55:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:13.236 06:55:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.236 06:55:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.236 06:55:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:13.236 06:55:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:13.236 06:55:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:13.236 06:55:27 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:13.236 06:55:27 -- fips/fips.sh@89 -- # check_openssl_version 00:18:13.236 06:55:27 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:13.236 06:55:27 -- fips/fips.sh@85 -- # openssl version 00:18:13.236 06:55:27 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:13.236 06:55:27 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:13.236 06:55:27 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:13.236 06:55:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:13.236 06:55:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:13.236 06:55:27 -- scripts/common.sh@335 -- # IFS=.-: 00:18:13.236 06:55:27 -- scripts/common.sh@335 -- # read -ra ver1 00:18:13.236 06:55:27 -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.236 06:55:27 -- scripts/common.sh@336 -- # read -ra ver2 00:18:13.236 06:55:27 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:13.236 06:55:27 -- scripts/common.sh@339 -- # ver1_l=3 00:18:13.236 06:55:27 -- scripts/common.sh@340 -- # ver2_l=3 00:18:13.236 06:55:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:18:13.236 06:55:27 -- scripts/common.sh@343 -- # case "$op" in 00:18:13.236 06:55:27 -- scripts/common.sh@347 -- # : 1 00:18:13.236 06:55:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:13.236 06:55:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.236 06:55:27 -- scripts/common.sh@364 -- # decimal 3 00:18:13.236 06:55:27 -- scripts/common.sh@352 -- # local d=3 00:18:13.236 06:55:27 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:13.236 06:55:27 -- scripts/common.sh@354 -- # echo 3 00:18:13.236 06:55:27 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:13.236 06:55:27 -- scripts/common.sh@365 -- # decimal 3 00:18:13.236 06:55:27 -- scripts/common.sh@352 -- # local d=3 00:18:13.236 06:55:27 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:13.237 06:55:27 -- scripts/common.sh@354 -- # echo 3 00:18:13.237 06:55:27 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:13.237 06:55:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:13.237 06:55:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:13.237 06:55:27 -- scripts/common.sh@363 -- # (( v++ )) 00:18:13.237 06:55:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.237 06:55:27 -- scripts/common.sh@364 -- # decimal 0 00:18:13.237 06:55:27 -- scripts/common.sh@352 -- # local d=0 00:18:13.237 06:55:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:13.237 06:55:27 -- scripts/common.sh@354 -- # echo 0 00:18:13.237 06:55:27 -- scripts/common.sh@364 -- # ver1[v]=0 00:18:13.237 06:55:27 -- scripts/common.sh@365 -- # decimal 0 00:18:13.237 06:55:27 -- scripts/common.sh@352 -- # local d=0 00:18:13.237 06:55:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:13.237 06:55:27 -- scripts/common.sh@354 -- # echo 0 00:18:13.237 06:55:27 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:13.237 06:55:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:13.237 06:55:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:13.237 06:55:27 -- scripts/common.sh@363 -- # (( v++ )) 00:18:13.237 06:55:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.237 06:55:27 -- scripts/common.sh@364 -- # decimal 9 00:18:13.237 06:55:27 -- scripts/common.sh@352 -- # local d=9 00:18:13.237 06:55:27 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:13.237 06:55:27 -- scripts/common.sh@354 -- # echo 9 00:18:13.237 06:55:27 -- scripts/common.sh@364 -- # ver1[v]=9 00:18:13.237 06:55:27 -- scripts/common.sh@365 -- # decimal 0 00:18:13.237 06:55:27 -- scripts/common.sh@352 -- # local d=0 00:18:13.237 06:55:27 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:13.237 06:55:27 -- scripts/common.sh@354 -- # echo 0 00:18:13.237 06:55:27 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:13.237 06:55:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:13.237 06:55:27 -- scripts/common.sh@366 -- # return 0 00:18:13.237 06:55:27 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:13.237 06:55:27 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:13.237 06:55:27 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:13.237 06:55:27 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:13.237 06:55:27 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:13.237 06:55:27 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:13.237 06:55:27 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:13.237 06:55:27 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:18:13.237 06:55:27 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:18:13.237 06:55:27 -- fips/fips.sh@114 -- # build_openssl_config 00:18:13.237 06:55:27 -- fips/fips.sh@37 -- # cat 00:18:13.237 06:55:27 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:13.237 06:55:27 -- fips/fips.sh@58 -- # cat - 00:18:13.237 06:55:27 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:13.237 06:55:27 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:13.237 06:55:27 -- fips/fips.sh@117 -- # mapfile -t providers 00:18:13.237 06:55:27 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:18:13.237 06:55:27 -- fips/fips.sh@117 -- # openssl list -providers 00:18:13.237 06:55:27 -- fips/fips.sh@117 -- # grep name 00:18:13.237 06:55:27 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:13.237 06:55:27 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:13.237 06:55:27 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:13.237 06:55:27 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:13.237 06:55:27 -- fips/fips.sh@128 -- # : 00:18:13.237 06:55:27 -- common/autotest_common.sh@640 -- # local es=0 00:18:13.237 06:55:27 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:13.237 06:55:27 -- common/autotest_common.sh@628 -- # local arg=openssl 00:18:13.237 06:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.237 06:55:27 -- common/autotest_common.sh@632 -- # type -t openssl 00:18:13.237 06:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.237 06:55:27 -- common/autotest_common.sh@634 -- # type -P openssl 00:18:13.237 06:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:13.237 06:55:27 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:18:13.237 06:55:27 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:18:13.237 06:55:27 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:18:13.237 Error setting digest 00:18:13.237 00E2C982617F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:13.237 00E2C982617F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:13.237 06:55:27 -- common/autotest_common.sh@643 -- # es=1 00:18:13.237 06:55:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:13.237 06:55:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:13.237 06:55:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
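Taken together, the steps above are the test's FIPS gate: require OpenSSL >= 3.0.0, check that the fips provider module is installed at /usr/lib64/ossl-modules/fips.so, confirm with 'openssl list -providers' that both the base and fips providers are loaded, and finally prove enforcement by expecting MD5, a non-approved digest, to fail — which it does with the "Error setting digest" output. A condensed sketch of the same gate (the version check is simplified relative to the script's cmp_versions helper):

    # sketch of the FIPS gate traced above; assumes a RHEL-style OpenSSL 3 layout
    ver=$(openssl version | awk '{print $2}')
    [[ "$ver" == 3.* ]] || exit 1                       # need OpenSSL >= 3.0.0
    [[ -f /usr/lib64/ossl-modules/fips.so ]] || exit 1  # fips provider installed
    openssl list -providers | grep -qi fips || exit 1   # fips provider loaded
    openssl md5 /dev/null 2>/dev/null && exit 1         # MD5 must be rejected
    echo "FIPS enforcement confirmed"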
00:18:13.237 06:55:27 -- fips/fips.sh@131 -- # nvmftestinit 00:18:13.237 06:55:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:13.237 06:55:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.237 06:55:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:13.237 06:55:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:13.237 06:55:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:13.237 06:55:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.237 06:55:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.237 06:55:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.237 06:55:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:13.237 06:55:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:13.237 06:55:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:13.237 06:55:27 -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 06:55:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:15.766 06:55:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:15.766 06:55:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:15.766 06:55:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:15.766 06:55:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:15.766 06:55:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:15.766 06:55:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:15.766 06:55:29 -- nvmf/common.sh@294 -- # net_devs=() 00:18:15.766 06:55:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:15.766 06:55:29 -- nvmf/common.sh@295 -- # e810=() 00:18:15.766 06:55:29 -- nvmf/common.sh@295 -- # local -ga e810 00:18:15.766 06:55:29 -- nvmf/common.sh@296 -- # x722=() 00:18:15.766 06:55:29 -- nvmf/common.sh@296 -- # local -ga x722 00:18:15.766 06:55:29 -- nvmf/common.sh@297 -- # mlx=() 00:18:15.766 06:55:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:15.766 06:55:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.766 06:55:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:15.766 06:55:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:15.766 06:55:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:15.766 06:55:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:15.766 06:55:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:15.766 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:18:15.766 06:55:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:15.766 06:55:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:15.766 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:15.766 06:55:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:15.766 06:55:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:15.766 06:55:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.766 06:55:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:15.766 06:55:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.766 06:55:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:15.766 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:15.766 06:55:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.766 06:55:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:15.766 06:55:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.766 06:55:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:15.766 06:55:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.766 06:55:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:15.766 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:15.766 06:55:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.766 06:55:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:15.766 06:55:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:15.766 06:55:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:15.766 06:55:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.766 06:55:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.766 06:55:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.766 06:55:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:15.766 06:55:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.766 06:55:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.766 06:55:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:15.766 06:55:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.766 06:55:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.766 06:55:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:15.766 06:55:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:15.766 06:55:29 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:18:15.766 06:55:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.766 06:55:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.766 06:55:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.766 06:55:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:15.766 06:55:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.766 06:55:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.766 06:55:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:15.766 06:55:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:15.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:18:15.766 00:18:15.766 --- 10.0.0.2 ping statistics --- 00:18:15.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.766 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:18:15.766 06:55:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:15.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:18:15.766 00:18:15.766 --- 10.0.0.1 ping statistics --- 00:18:15.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.766 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:18:15.766 06:55:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.766 06:55:29 -- nvmf/common.sh@410 -- # return 0 00:18:15.766 06:55:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:15.766 06:55:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.766 06:55:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:15.766 06:55:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.766 06:55:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:15.766 06:55:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:15.766 06:55:29 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:15.766 06:55:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:15.766 06:55:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:15.766 06:55:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 06:55:29 -- nvmf/common.sh@469 -- # nvmfpid=527856 00:18:15.766 06:55:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.766 06:55:29 -- nvmf/common.sh@470 -- # waitforlisten 527856 00:18:15.766 06:55:29 -- common/autotest_common.sh@819 -- # '[' -z 527856 ']' 00:18:15.766 06:55:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.766 06:55:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:15.766 06:55:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.766 06:55:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:15.766 06:55:29 -- common/autotest_common.sh@10 -- # set +x 00:18:15.766 [2024-05-15 06:55:29.952137] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
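nvmftestinit above moves one port of the E810 pair into a private network namespace so the TLS traffic crosses a real link: cvl_0_0 becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify the path in both directions. The plumbing, condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target; the reverse ping runs inside the netns

This is why every nvmf_tgt start is wrapped in 'ip netns exec cvl_0_0_ns_spdk': the target must listen on the namespaced interface.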
00:18:15.766 [2024-05-15 06:55:29.952220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.766 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.024 [2024-05-15 06:55:30.034275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.024 [2024-05-15 06:55:30.149538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.024 [2024-05-15 06:55:30.149714] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.024 [2024-05-15 06:55:30.149731] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.024 [2024-05-15 06:55:30.149744] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.024 [2024-05-15 06:55:30.149792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.984 06:55:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:16.984 06:55:30 -- common/autotest_common.sh@852 -- # return 0 00:18:16.984 06:55:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:16.984 06:55:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:16.984 06:55:30 -- common/autotest_common.sh@10 -- # set +x 00:18:16.984 06:55:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.984 06:55:30 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:16.984 06:55:30 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:16.984 06:55:30 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:16.984 06:55:30 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:16.984 06:55:30 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:16.984 06:55:30 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:16.984 06:55:30 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:16.984 06:55:30 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:16.984 [2024-05-15 06:55:31.112641] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.984 [2024-05-15 06:55:31.128621] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.984 [2024-05-15 06:55:31.128858] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.984 malloc0 00:18:16.984 06:55:31 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.984 06:55:31 -- fips/fips.sh@148 -- # bdevperf_pid=528019 00:18:16.984 06:55:31 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.984 06:55:31 -- fips/fips.sh@149 -- # waitforlisten 528019 /var/tmp/bdevperf.sock 00:18:16.984 06:55:31 -- common/autotest_common.sh@819 -- # '[' -z 528019 ']' 00:18:16.984 06:55:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.984 06:55:31 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:18:16.984 06:55:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.984 06:55:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:16.984 06:55:31 -- common/autotest_common.sh@10 -- # set +x 00:18:17.243 [2024-05-15 06:55:31.248665] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:18:17.243 [2024-05-15 06:55:31.248748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528019 ] 00:18:17.243 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.243 [2024-05-15 06:55:31.316447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.243 [2024-05-15 06:55:31.419814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.176 06:55:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:18.176 06:55:32 -- common/autotest_common.sh@852 -- # return 0 00:18:18.176 06:55:32 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:18.435 [2024-05-15 06:55:32.427854] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.435 TLSTESTn1 00:18:18.435 06:55:32 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:18.435 Running I/O for 10 seconds... 
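The attach above is where TLS is actually exercised: bdev_nvme_attach_controller hands the initiator the same PSK file (the NVMeTLSkey-1:01:... interchange format, written with mode 0600) that setup_nvmf_tgt_conf bound to host1 on the target, so both ends derive matching TLS credentials. Roughly, with the long workspace prefix shortened (the target-side RPC form is inferred from the configuration shown earlier):

    # target side: authorize host1 on cnode1 with the PSK
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk test/nvmf/fips/key.txt
    # initiator side: attach to the secure listener with the same key
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt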
00:18:30.621 00:18:30.621 Latency(us) 00:18:30.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.621 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.621 Verification LBA range: start 0x0 length 0x2000 00:18:30.621 TLSTESTn1 : 10.05 1381.56 5.40 0.00 0.00 92489.26 11553.75 114178.28 00:18:30.621 =================================================================================================================== 00:18:30.621 Total : 1381.56 5.40 0.00 0.00 92489.26 11553.75 114178.28 00:18:30.621 0 00:18:30.621 06:55:42 -- fips/fips.sh@1 -- # cleanup 00:18:30.621 06:55:42 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:30.621 06:55:42 -- common/autotest_common.sh@796 -- # type=--id 00:18:30.621 06:55:42 -- common/autotest_common.sh@797 -- # id=0 00:18:30.621 06:55:42 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:30.621 06:55:42 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:30.621 06:55:42 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:30.621 06:55:42 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:30.621 06:55:42 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:30.621 06:55:42 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:30.621 nvmf_trace.0 00:18:30.621 06:55:42 -- common/autotest_common.sh@811 -- # return 0 00:18:30.621 06:55:42 -- fips/fips.sh@16 -- # killprocess 528019 00:18:30.621 06:55:42 -- common/autotest_common.sh@926 -- # '[' -z 528019 ']' 00:18:30.621 06:55:42 -- common/autotest_common.sh@930 -- # kill -0 528019 00:18:30.621 06:55:42 -- common/autotest_common.sh@931 -- # uname 00:18:30.621 06:55:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:30.621 06:55:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 528019 00:18:30.621 06:55:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:30.621 06:55:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:30.621 06:55:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 528019' 00:18:30.621 killing process with pid 528019 00:18:30.621 06:55:42 -- common/autotest_common.sh@945 -- # kill 528019 00:18:30.621 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.621 00:18:30.621 Latency(us) 00:18:30.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.621 =================================================================================================================== 00:18:30.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.621 06:55:42 -- common/autotest_common.sh@950 -- # wait 528019 00:18:30.621 06:55:43 -- fips/fips.sh@17 -- # nvmftestfini 00:18:30.621 06:55:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:30.621 06:55:43 -- nvmf/common.sh@116 -- # sync 00:18:30.621 06:55:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:30.621 06:55:43 -- nvmf/common.sh@119 -- # set +e 00:18:30.621 06:55:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:30.621 06:55:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:30.621 rmmod nvme_tcp 00:18:30.621 rmmod nvme_fabrics 00:18:30.621 rmmod nvme_keyring 00:18:30.621 06:55:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:30.621 06:55:43 -- nvmf/common.sh@123 -- # set -e 00:18:30.621 06:55:43 -- nvmf/common.sh@124 -- # return 0 
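On teardown, process_shm archives the target's tracepoint shared-memory buffer (present because nvmf_tgt ran with tracing enabled via -e 0xFFFF as instance 0) so events can be decoded offline. The same capture by hand, following the hint the app prints at startup:

    # archive the trace buffer nvmf_tgt left in /dev/shm
    tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0
    # or snapshot live events while the target is still running
    spdk_trace -s nvmf -i 0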
00:18:30.621 06:55:43 -- nvmf/common.sh@477 -- # '[' -n 527856 ']' 00:18:30.621 06:55:43 -- nvmf/common.sh@478 -- # killprocess 527856 00:18:30.621 06:55:43 -- common/autotest_common.sh@926 -- # '[' -z 527856 ']' 00:18:30.621 06:55:43 -- common/autotest_common.sh@930 -- # kill -0 527856 00:18:30.621 06:55:43 -- common/autotest_common.sh@931 -- # uname 00:18:30.621 06:55:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:30.621 06:55:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 527856 00:18:30.621 06:55:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:30.621 06:55:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:30.621 06:55:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 527856' 00:18:30.621 killing process with pid 527856 00:18:30.621 06:55:43 -- common/autotest_common.sh@945 -- # kill 527856 00:18:30.621 06:55:43 -- common/autotest_common.sh@950 -- # wait 527856 00:18:30.621 06:55:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:30.621 06:55:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:30.621 06:55:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:30.621 06:55:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.621 06:55:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:30.621 06:55:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.621 06:55:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.621 06:55:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.557 06:55:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:31.557 06:55:45 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:31.557 00:18:31.557 real 0m18.344s 00:18:31.557 user 0m22.619s 00:18:31.557 sys 0m7.096s 00:18:31.557 06:55:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.557 06:55:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.557 ************************************ 00:18:31.557 END TEST nvmf_fips 00:18:31.557 ************************************ 00:18:31.557 06:55:45 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:31.557 06:55:45 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:31.557 06:55:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:31.557 06:55:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:31.557 06:55:45 -- common/autotest_common.sh@10 -- # set +x 00:18:31.557 ************************************ 00:18:31.557 START TEST nvmf_fuzz 00:18:31.557 ************************************ 00:18:31.557 06:55:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:31.557 * Looking for test storage... 
00:18:31.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.557 06:55:45 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.557 06:55:45 -- nvmf/common.sh@7 -- # uname -s 00:18:31.557 06:55:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.557 06:55:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.557 06:55:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.557 06:55:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.557 06:55:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.557 06:55:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.557 06:55:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.557 06:55:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.557 06:55:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.557 06:55:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.557 06:55:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.557 06:55:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.557 06:55:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.557 06:55:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.557 06:55:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.557 06:55:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.557 06:55:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.557 06:55:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.557 06:55:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.557 06:55:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.557 06:55:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.557 06:55:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.557 06:55:45 -- paths/export.sh@5 -- # export PATH 00:18:31.557 06:55:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.557 06:55:45 -- nvmf/common.sh@46 -- # : 0 00:18:31.557 06:55:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:31.557 06:55:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:31.557 06:55:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:31.557 06:55:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.557 06:55:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.557 06:55:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:31.557 06:55:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:31.557 06:55:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:31.557 06:55:45 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:31.557 06:55:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:31.557 06:55:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.557 06:55:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:31.557 06:55:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:31.557 06:55:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:31.557 06:55:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.557 06:55:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.557 06:55:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.557 06:55:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:31.557 06:55:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:31.557 06:55:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:31.557 06:55:45 -- common/autotest_common.sh@10 -- # set +x 00:18:34.088 06:55:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:34.088 06:55:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:34.088 06:55:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:34.088 06:55:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:34.088 06:55:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:34.088 06:55:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:34.088 06:55:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:34.088 06:55:48 -- nvmf/common.sh@294 -- # net_devs=() 00:18:34.088 06:55:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:34.088 06:55:48 -- nvmf/common.sh@295 -- # e810=() 00:18:34.088 06:55:48 -- nvmf/common.sh@295 -- # local -ga e810 00:18:34.088 06:55:48 -- nvmf/common.sh@296 -- # x722=() 
00:18:34.088 06:55:48 -- nvmf/common.sh@296 -- # local -ga x722 00:18:34.088 06:55:48 -- nvmf/common.sh@297 -- # mlx=() 00:18:34.088 06:55:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:34.088 06:55:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.088 06:55:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:34.088 06:55:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:34.088 06:55:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:34.088 06:55:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:34.088 06:55:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:34.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:34.088 06:55:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:34.088 06:55:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:34.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:34.088 06:55:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:34.088 06:55:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:34.088 06:55:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.088 06:55:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:34.088 06:55:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.088 06:55:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:34.088 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:34.088 06:55:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
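The discovery loop above resolves each supported PCI function to its kernel net device through sysfs: gather_supported_nvmf_pci_devs matches the Intel E810 IDs (0x8086:0x159b here), then lists /sys/bus/pci/devices/$pci/net/ to find the bound interface. A standalone sketch of the same lookup, assuming lspci is available on the host:

  # list the net devices backing each Intel E810 (0x159b) PCI function
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "Found net device under $pci: ${dev##*/}"
      done
  done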
00:18:34.088 06:55:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:34.088 06:55:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.088 06:55:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:34.088 06:55:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.088 06:55:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:34.088 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:34.088 06:55:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.088 06:55:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:34.088 06:55:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:34.088 06:55:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:34.088 06:55:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.088 06:55:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.088 06:55:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:34.088 06:55:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:34.088 06:55:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:34.088 06:55:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:34.088 06:55:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:34.088 06:55:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:34.088 06:55:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.088 06:55:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:34.088 06:55:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:34.088 06:55:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:34.088 06:55:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:34.088 06:55:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:34.088 06:55:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.088 06:55:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:34.088 06:55:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.088 06:55:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:34.088 06:55:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:34.088 06:55:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:34.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:18:34.088 00:18:34.088 --- 10.0.0.2 ping statistics --- 00:18:34.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.088 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:18:34.088 06:55:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:34.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:34.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:18:34.088 00:18:34.088 --- 10.0.0.1 ping statistics --- 00:18:34.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.088 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:18:34.088 06:55:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.088 06:55:48 -- nvmf/common.sh@410 -- # return 0 00:18:34.088 06:55:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:34.088 06:55:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.088 06:55:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:34.088 06:55:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.088 06:55:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:34.088 06:55:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:34.088 06:55:48 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=531747 00:18:34.088 06:55:48 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:34.088 06:55:48 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:34.088 06:55:48 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 531747 00:18:34.088 06:55:48 -- common/autotest_common.sh@819 -- # '[' -z 531747 ']' 00:18:34.088 06:55:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.088 06:55:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:34.088 06:55:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
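waitforlisten, traced above, starts polling as soon as the target is launched and returns once the RPC socket answers. A minimal sketch of the same launch-and-wait pattern, assuming the default /var/tmp/spdk.sock socket path and SPDK's scripts/rpc.py client from the source tree:

  # start the target inside its namespace and wait for the RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || exit 1   # give up if the target died during startup
      sleep 0.5
  done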
00:18:34.088 06:55:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:34.088 06:55:48 -- common/autotest_common.sh@10 -- # set +x 00:18:35.021 06:55:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:35.021 06:55:49 -- common/autotest_common.sh@852 -- # return 0 00:18:35.021 06:55:49 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.021 06:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.021 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:35.021 06:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.021 06:55:49 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:35.021 06:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.021 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:35.279 Malloc0 00:18:35.279 06:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.279 06:55:49 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:35.279 06:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.279 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:35.279 06:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.279 06:55:49 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.279 06:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.279 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:35.279 06:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.279 06:55:49 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.279 06:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:35.279 06:55:49 -- common/autotest_common.sh@10 -- # set +x 00:18:35.279 06:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:35.279 06:55:49 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:35.279 06:55:49 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:19:07.370 Fuzzing completed. Shutting down the fuzz application 00:19:07.370 00:19:07.370 Dumping successful admin opcodes: 00:19:07.370 8, 9, 10, 24, 00:19:07.370 Dumping successful io opcodes: 00:19:07.370 0, 9, 00:19:07.370 NS: 0x200003aeff00 I/O qp, Total commands completed: 456026, total successful commands: 2646, random_seed: 2764510400 00:19:07.370 NS: 0x200003aeff00 admin qp, Total commands completed: 56448, total successful commands: 448, random_seed: 2746291392 00:19:07.370 06:56:19 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:19:07.370 Fuzzing completed. 
Shutting down the fuzz application 00:19:07.370 00:19:07.370 Dumping successful admin opcodes: 00:19:07.370 24, 00:19:07.370 Dumping successful io opcodes: 00:19:07.370 00:19:07.370 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1000961122 00:19:07.370 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1001086471 00:19:07.370 06:56:21 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.370 06:56:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:07.370 06:56:21 -- common/autotest_common.sh@10 -- # set +x 00:19:07.370 06:56:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:07.370 06:56:21 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:07.370 06:56:21 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:19:07.370 06:56:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.370 06:56:21 -- nvmf/common.sh@116 -- # sync 00:19:07.370 06:56:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:07.370 06:56:21 -- nvmf/common.sh@119 -- # set +e 00:19:07.370 06:56:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.370 06:56:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:07.370 rmmod nvme_tcp 00:19:07.370 rmmod nvme_fabrics 00:19:07.370 rmmod nvme_keyring 00:19:07.370 06:56:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:07.370 06:56:21 -- nvmf/common.sh@123 -- # set -e 00:19:07.370 06:56:21 -- nvmf/common.sh@124 -- # return 0 00:19:07.370 06:56:21 -- nvmf/common.sh@477 -- # '[' -n 531747 ']' 00:19:07.370 06:56:21 -- nvmf/common.sh@478 -- # killprocess 531747 00:19:07.370 06:56:21 -- common/autotest_common.sh@926 -- # '[' -z 531747 ']' 00:19:07.370 06:56:21 -- common/autotest_common.sh@930 -- # kill -0 531747 00:19:07.370 06:56:21 -- common/autotest_common.sh@931 -- # uname 00:19:07.370 06:56:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.370 06:56:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 531747 00:19:07.628 06:56:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:07.628 06:56:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:07.628 06:56:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 531747' 00:19:07.628 killing process with pid 531747 00:19:07.628 06:56:21 -- common/autotest_common.sh@945 -- # kill 531747 00:19:07.628 06:56:21 -- common/autotest_common.sh@950 -- # wait 531747 00:19:07.887 06:56:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.887 06:56:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.887 06:56:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.887 06:56:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.887 06:56:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.887 06:56:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.887 06:56:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.887 06:56:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.792 06:56:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.792 06:56:23 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:19:09.792 00:19:09.792 real 0m38.421s 00:19:09.792 user 0m52.302s 00:19:09.792 sys 0m15.451s 
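Both fuzz passes above drive the same nvme_fuzz binary: first a 30-second randomized run with a fixed seed against the admin and I/O queues, then a deterministic replay of the canned commands in example.json. The equivalent standalone invocations, with every flag copied from the trace (the -F trid must name a live listener on the target):

  # randomized 30 s fuzz, seeded for reproducibility
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
  # deterministic replay of the canned command set
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' \
      -j ./test/app/fuzz/nvme_fuzz/example.json -a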
00:19:09.792 06:56:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.792 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:19:09.792 ************************************ 00:19:09.792 END TEST nvmf_fuzz 00:19:09.792 ************************************ 00:19:09.792 06:56:23 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:09.792 06:56:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:09.792 06:56:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.792 06:56:23 -- common/autotest_common.sh@10 -- # set +x 00:19:09.792 ************************************ 00:19:09.792 START TEST nvmf_multiconnection 00:19:09.792 ************************************ 00:19:09.792 06:56:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:09.792 * Looking for test storage... 00:19:10.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.049 06:56:24 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.049 06:56:24 -- nvmf/common.sh@7 -- # uname -s 00:19:10.049 06:56:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.049 06:56:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.049 06:56:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.049 06:56:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.049 06:56:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.049 06:56:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.049 06:56:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.049 06:56:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.049 06:56:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.049 06:56:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.049 06:56:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.049 06:56:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.049 06:56:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.050 06:56:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.050 06:56:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.050 06:56:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.050 06:56:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.050 06:56:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.050 06:56:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.050 06:56:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.050 06:56:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.050 06:56:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.050 06:56:24 -- paths/export.sh@5 -- # export PATH 00:19:10.050 06:56:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.050 06:56:24 -- nvmf/common.sh@46 -- # : 0 00:19:10.050 06:56:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:10.050 06:56:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:10.050 06:56:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:10.050 06:56:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.050 06:56:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.050 06:56:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:10.050 06:56:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:10.050 06:56:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:10.050 06:56:24 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.050 06:56:24 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.050 06:56:24 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:19:10.050 06:56:24 -- target/multiconnection.sh@16 -- # nvmftestinit 00:19:10.050 06:56:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:10.050 06:56:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.050 06:56:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:10.050 06:56:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:10.050 06:56:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:10.050 06:56:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.050 06:56:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.050 06:56:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.050 06:56:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:10.050 06:56:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:10.050 06:56:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:10.050 06:56:24 -- common/autotest_common.sh@10 -- 
# set +x 00:19:12.578 06:56:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:12.578 06:56:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:12.578 06:56:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:12.578 06:56:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:12.578 06:56:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:12.578 06:56:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:12.578 06:56:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:12.578 06:56:26 -- nvmf/common.sh@294 -- # net_devs=() 00:19:12.578 06:56:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:12.578 06:56:26 -- nvmf/common.sh@295 -- # e810=() 00:19:12.578 06:56:26 -- nvmf/common.sh@295 -- # local -ga e810 00:19:12.578 06:56:26 -- nvmf/common.sh@296 -- # x722=() 00:19:12.578 06:56:26 -- nvmf/common.sh@296 -- # local -ga x722 00:19:12.578 06:56:26 -- nvmf/common.sh@297 -- # mlx=() 00:19:12.578 06:56:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:12.578 06:56:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.578 06:56:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:12.578 06:56:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:12.578 06:56:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:12.578 06:56:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.578 06:56:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:12.578 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:12.578 06:56:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.578 06:56:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:12.578 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:12.578 06:56:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.578 06:56:26 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:12.578 06:56:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:12.578 06:56:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.578 06:56:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.578 06:56:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.578 06:56:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.578 06:56:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:12.578 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:12.578 06:56:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.578 06:56:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.578 06:56:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.579 06:56:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.579 06:56:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.579 06:56:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:12.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:12.579 06:56:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.579 06:56:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:12.579 06:56:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:12.579 06:56:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:12.579 06:56:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:12.579 06:56:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:12.579 06:56:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.579 06:56:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.579 06:56:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.579 06:56:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:12.579 06:56:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.579 06:56:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.579 06:56:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:12.579 06:56:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.579 06:56:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.579 06:56:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:12.579 06:56:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:12.579 06:56:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.579 06:56:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.579 06:56:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.579 06:56:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.579 06:56:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:12.579 06:56:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.579 06:56:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.579 06:56:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.579 06:56:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:12.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:12.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:19:12.579 00:19:12.579 --- 10.0.0.2 ping statistics --- 00:19:12.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.579 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:12.579 06:56:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:19:12.579 00:19:12.579 --- 10.0.0.1 ping statistics --- 00:19:12.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.579 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:19:12.579 06:56:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.579 06:56:26 -- nvmf/common.sh@410 -- # return 0 00:19:12.579 06:56:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:12.579 06:56:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.579 06:56:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:12.579 06:56:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:12.579 06:56:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.579 06:56:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:12.579 06:56:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:12.579 06:56:26 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:19:12.579 06:56:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:12.579 06:56:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:12.579 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:19:12.579 06:56:26 -- nvmf/common.sh@469 -- # nvmfpid=538042 00:19:12.579 06:56:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.579 06:56:26 -- nvmf/common.sh@470 -- # waitforlisten 538042 00:19:12.579 06:56:26 -- common/autotest_common.sh@819 -- # '[' -z 538042 ']' 00:19:12.579 06:56:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.579 06:56:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:12.579 06:56:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.579 06:56:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:12.579 06:56:26 -- common/autotest_common.sh@10 -- # set +x 00:19:12.579 [2024-05-15 06:56:26.665988] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:12.579 [2024-05-15 06:56:26.666064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.579 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.579 [2024-05-15 06:56:26.743757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.837 [2024-05-15 06:56:26.855081] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:12.837 [2024-05-15 06:56:26.855236] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.837 [2024-05-15 06:56:26.855253] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
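The multiconnection trace that follows builds eleven identical targets: one TCP transport, then for each of cnode1..cnode11 a 64 MiB malloc bdev, a subsystem, a namespace, and a 10.0.0.2:4420 listener, after which each subsystem is connected from the initiator side and verified by its SPDK$i serial. Condensed into the underlying RPCs and host commands as a sketch — the harness issues these through its rpc_cmd, nvme connect, and waitforserial helpers, and the --hostnqn/--hostid flags seen in the trace are omitted here:

  # target side: one transport, eleven subsystems (pattern traced below)
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 11); do
      rpc.py bdev_malloc_create 64 512 -b Malloc$i
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  # host side: connect each subsystem and wait for its serial to appear
  for i in $(seq 1 11); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do sleep 2; done
  done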
00:19:12.837 [2024-05-15 06:56:26.855265] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.837 [2024-05-15 06:56:26.855328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.837 [2024-05-15 06:56:26.855392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.837 [2024-05-15 06:56:26.855455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.837 [2024-05-15 06:56:26.855458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.402 06:56:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:13.402 06:56:27 -- common/autotest_common.sh@852 -- # return 0 00:19:13.402 06:56:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:13.402 06:56:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:13.402 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.402 06:56:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.402 06:56:27 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:13.402 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.402 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 [2024-05-15 06:56:27.640341] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@21 -- # seq 1 11 00:19:13.661 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.661 06:56:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 Malloc1 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 [2024-05-15 06:56:27.695212] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.661 06:56:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 Malloc2 00:19:13.661 06:56:27 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.661 06:56:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 Malloc3 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.661 06:56:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 Malloc4 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 
-- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:19:13.661 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.661 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.661 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.661 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.662 06:56:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:13.662 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.662 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.662 Malloc5 00:19:13.662 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.662 06:56:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:19:13.662 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.662 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.662 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.662 06:56:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:13.662 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.662 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.662 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.662 06:56:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:19:13.662 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.662 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.662 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.662 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.662 06:56:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:19:13.662 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.662 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 Malloc6 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.921 06:56:27 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 Malloc7 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:27 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.921 06:56:27 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:19:13.921 06:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 Malloc8 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.921 06:56:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 Malloc9 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:28 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.921 06:56:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 Malloc10 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.921 06:56:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:19:13.921 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.921 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.921 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.922 06:56:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:19:13.922 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.922 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.922 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.922 06:56:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:19:13.922 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.922 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:13.922 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:13.922 06:56:28 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:13.922 06:56:28 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:19:13.922 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:13.922 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:14.180 Malloc11 00:19:14.180 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.180 06:56:28 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:19:14.180 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.180 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:14.180 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.180 06:56:28 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:19:14.180 06:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:14.180 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:14.180 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.180 06:56:28 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:19:14.180 06:56:28 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:19:14.180 06:56:28 -- common/autotest_common.sh@10 -- # set +x 00:19:14.180 06:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:14.180 06:56:28 -- target/multiconnection.sh@28 -- # seq 1 11 00:19:14.180 06:56:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:14.180 06:56:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:14.743 06:56:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:19:14.743 06:56:28 -- common/autotest_common.sh@1177 -- # local i=0 00:19:14.743 06:56:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:14.743 06:56:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:14.743 06:56:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:16.640 06:56:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:16.640 06:56:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:16.640 06:56:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:19:16.640 06:56:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:16.640 06:56:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:16.640 06:56:30 -- common/autotest_common.sh@1187 -- # return 0 00:19:16.640 06:56:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:16.640 06:56:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:19:17.205 06:56:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:19:17.205 06:56:31 -- common/autotest_common.sh@1177 -- # local i=0 00:19:17.205 06:56:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:17.205 06:56:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:17.205 06:56:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:19.730 06:56:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:19.730 06:56:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:19.730 06:56:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:19:19.730 06:56:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:19.730 06:56:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.730 06:56:33 -- common/autotest_common.sh@1187 -- # return 0 00:19:19.730 06:56:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:19.730 06:56:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:19:19.988 06:56:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:19:19.988 06:56:34 -- common/autotest_common.sh@1177 -- # local i=0 00:19:19.988 06:56:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.988 06:56:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:19.988 06:56:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:21.887 06:56:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:21.887 06:56:36 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:19:21.887 06:56:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:19:21.887 06:56:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:21.887 06:56:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.887 06:56:36 -- common/autotest_common.sh@1187 -- # return 0 00:19:21.887 06:56:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:21.887 06:56:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:19:22.819 06:56:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:19:22.819 06:56:36 -- common/autotest_common.sh@1177 -- # local i=0 00:19:22.819 06:56:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.819 06:56:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:22.819 06:56:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:24.752 06:56:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:24.752 06:56:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:24.752 06:56:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:19:24.752 06:56:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:24.752 06:56:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.752 06:56:38 -- common/autotest_common.sh@1187 -- # return 0 00:19:24.752 06:56:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:24.752 06:56:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:19:25.318 06:56:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:25.318 06:56:39 -- common/autotest_common.sh@1177 -- # local i=0 00:19:25.318 06:56:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.318 06:56:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:25.318 06:56:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:27.215 06:56:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:27.215 06:56:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:27.215 06:56:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:19:27.215 06:56:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:27.215 06:56:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.215 06:56:41 -- common/autotest_common.sh@1187 -- # return 0 00:19:27.215 06:56:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.215 06:56:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:19:28.146 06:56:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:28.146 06:56:42 -- common/autotest_common.sh@1177 -- # local i=0 00:19:28.146 06:56:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:28.146 06:56:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:28.146 06:56:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:30.041 
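Each connect above follows the same two-step pattern: nvme connect against cnode$i, then waitforserial polls lsblk until a block device advertising that subsystem's serial appears. A simplified standalone version of the helper (the real common/autotest_common.sh implementation also accepts an expected device count; this sketch assumes one namespace per serial):

waitforserial() {
    local serial=$1 i=0
    sleep 2    # give the kernel and udev time to surface the new namespace
    while (( i++ <= 15 )); do
        # a row whose SERIAL column matches means the connect completed
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1
}

for i in $(seq 1 $NVMF_SUBSYS); do
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
        -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
    waitforserial SPDK$i
done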
06:56:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:30.041 06:56:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:30.041 06:56:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:19:30.041 06:56:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:30.041 06:56:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:30.042 06:56:44 -- common/autotest_common.sh@1187 -- # return 0 00:19:30.042 06:56:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:30.042 06:56:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:30.974 06:56:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:30.974 06:56:45 -- common/autotest_common.sh@1177 -- # local i=0 00:19:30.974 06:56:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:30.975 06:56:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:30.975 06:56:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:32.870 06:56:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:32.870 06:56:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:32.870 06:56:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:19:32.870 06:56:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:32.870 06:56:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:32.870 06:56:47 -- common/autotest_common.sh@1187 -- # return 0 00:19:32.870 06:56:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.870 06:56:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:33.803 06:56:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:33.803 06:56:47 -- common/autotest_common.sh@1177 -- # local i=0 00:19:33.803 06:56:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:33.803 06:56:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:33.803 06:56:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:35.698 06:56:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:35.698 06:56:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:35.698 06:56:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:19:35.698 06:56:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:35.698 06:56:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.698 06:56:49 -- common/autotest_common.sh@1187 -- # return 0 00:19:35.698 06:56:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:35.698 06:56:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:36.630 06:56:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:36.630 06:56:50 -- common/autotest_common.sh@1177 -- # local i=0 00:19:36.630 06:56:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.630 06:56:50 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:36.630 06:56:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:38.525 06:56:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:38.525 06:56:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:38.525 06:56:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:19:38.525 06:56:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:38.525 06:56:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.525 06:56:52 -- common/autotest_common.sh@1187 -- # return 0 00:19:38.525 06:56:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.525 06:56:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:39.476 06:56:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:39.476 06:56:53 -- common/autotest_common.sh@1177 -- # local i=0 00:19:39.476 06:56:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:39.476 06:56:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:39.476 06:56:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:41.380 06:56:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:41.380 06:56:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:41.380 06:56:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:19:41.380 06:56:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:41.380 06:56:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:41.380 06:56:55 -- common/autotest_common.sh@1187 -- # return 0 00:19:41.380 06:56:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:41.380 06:56:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:42.339 06:56:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:42.339 06:56:56 -- common/autotest_common.sh@1177 -- # local i=0 00:19:42.339 06:56:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:42.339 06:56:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:42.339 06:56:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:44.239 06:56:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:44.239 06:56:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:44.239 06:56:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:19:44.239 06:56:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:44.239 06:56:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:44.239 06:56:58 -- common/autotest_common.sh@1187 -- # return 0 00:19:44.239 06:56:58 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:44.239 [global] 00:19:44.239 thread=1 00:19:44.239 invalidate=1 00:19:44.239 rw=read 00:19:44.239 time_based=1 00:19:44.239 runtime=10 00:19:44.239 ioengine=libaio 00:19:44.239 direct=1 00:19:44.239 bs=262144 00:19:44.239 iodepth=64 00:19:44.239 norandommap=1 00:19:44.239 numjobs=1 00:19:44.239 00:19:44.239 [job0] 
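fio-wrapper turns its flags into the job file echoed above: -i 262144 becomes bs, -d 64 becomes iodepth, -t read becomes rw, and -r 10 becomes runtime (a mapping read off the [global] section, not taken from the wrapper's source). The same workload can be reproduced without the wrapper; the job-file name below is arbitrary:

cat > multiconnection.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1
EOF
# one [jobN] stanza per connected namespace, as in the listing that follows
n=0
for dev in /dev/nvme*n1; do
    printf '[job%d]\nfilename=%s\n' "$n" "$dev" >> multiconnection.fio
    n=$((n + 1))
done
fio multiconnection.fio

The shell expands /dev/nvme*n1 in lexicographic order, which is why job1 lands on /dev/nvme10n1 while job2 gets /dev/nvme1n1 in the listing below. The "Could not set queue depth" warnings that follow are informational: fio failed to raise a kernel-side queue limit for these devices (presumably via sysfs; not verified against this fio build), but the "IO depths : ... >=64" figures in the results confirm the requested iodepth of 64 was still sustained through libaio.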
00:19:44.239 filename=/dev/nvme0n1 00:19:44.239 [job1] 00:19:44.239 filename=/dev/nvme10n1 00:19:44.239 [job2] 00:19:44.239 filename=/dev/nvme1n1 00:19:44.239 [job3] 00:19:44.239 filename=/dev/nvme2n1 00:19:44.239 [job4] 00:19:44.239 filename=/dev/nvme3n1 00:19:44.239 [job5] 00:19:44.239 filename=/dev/nvme4n1 00:19:44.239 [job6] 00:19:44.239 filename=/dev/nvme5n1 00:19:44.239 [job7] 00:19:44.239 filename=/dev/nvme6n1 00:19:44.239 [job8] 00:19:44.239 filename=/dev/nvme7n1 00:19:44.239 [job9] 00:19:44.239 filename=/dev/nvme8n1 00:19:44.239 [job10] 00:19:44.239 filename=/dev/nvme9n1 00:19:44.239 Could not set queue depth (nvme0n1) 00:19:44.239 Could not set queue depth (nvme10n1) 00:19:44.239 Could not set queue depth (nvme1n1) 00:19:44.239 Could not set queue depth (nvme2n1) 00:19:44.239 Could not set queue depth (nvme3n1) 00:19:44.239 Could not set queue depth (nvme4n1) 00:19:44.239 Could not set queue depth (nvme5n1) 00:19:44.239 Could not set queue depth (nvme6n1) 00:19:44.239 Could not set queue depth (nvme7n1) 00:19:44.239 Could not set queue depth (nvme8n1) 00:19:44.239 Could not set queue depth (nvme9n1) 00:19:44.497 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:44.497 fio-3.35 00:19:44.497 Starting 11 threads 00:19:56.706 00:19:56.706 job0: (groupid=0, jobs=1): err= 0: pid=542408: Wed May 15 06:57:09 2024 00:19:56.706 read: IOPS=477, BW=119MiB/s (125MB/s)(1212MiB/10148msec) 00:19:56.706 slat (usec): min=9, max=392485, avg=1061.78, stdev=10182.76 00:19:56.706 clat (msec): min=2, max=857, avg=132.76, stdev=108.63 00:19:56.706 lat (msec): min=2, max=879, avg=133.83, stdev=109.93 00:19:56.706 clat percentiles (msec): 00:19:56.706 | 1.00th=[ 7], 5.00th=[ 15], 10.00th=[ 22], 20.00th=[ 42], 00:19:56.706 | 30.00th=[ 67], 40.00th=[ 82], 50.00th=[ 104], 60.00th=[ 142], 00:19:56.706 | 70.00th=[ 174], 80.00th=[ 207], 90.00th=[ 251], 95.00th=[ 334], 00:19:56.706 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 852], 99.95th=[ 860], 00:19:56.706 | 99.99th=[ 860] 00:19:56.706 bw ( KiB/s): min=32256, max=218624, per=8.32%, avg=122516.85, stdev=56053.68, samples=20 00:19:56.706 iops : min= 126, max= 854, avg=478.55, stdev=219.00, samples=20 00:19:56.706 lat (msec) : 4=0.14%, 10=2.52%, 20=6.19%, 50=13.71%, 100=26.13% 00:19:56.706 
lat (msec) : 250=41.37%, 500=9.34%, 750=0.31%, 1000=0.29% 00:19:56.706 cpu : usr=0.18%, sys=1.36%, ctx=1468, majf=0, minf=4097 00:19:56.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:56.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.706 issued rwts: total=4849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.706 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.706 job1: (groupid=0, jobs=1): err= 0: pid=542409: Wed May 15 06:57:09 2024 00:19:56.706 read: IOPS=584, BW=146MiB/s (153MB/s)(1471MiB/10074msec) 00:19:56.706 slat (usec): min=10, max=256567, avg=1227.68, stdev=8335.96 00:19:56.706 clat (msec): min=2, max=852, avg=108.26, stdev=127.70 00:19:56.706 lat (msec): min=2, max=852, avg=109.49, stdev=128.98 00:19:56.706 clat percentiles (msec): 00:19:56.706 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 35], 20.00th=[ 51], 00:19:56.706 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 81], 00:19:56.706 | 70.00th=[ 97], 80.00th=[ 117], 90.00th=[ 169], 95.00th=[ 464], 00:19:56.706 | 99.00th=[ 743], 99.50th=[ 793], 99.90th=[ 844], 99.95th=[ 852], 00:19:56.706 | 99.99th=[ 852] 00:19:56.706 bw ( KiB/s): min=13824, max=304542, per=10.11%, avg=148987.10, stdev=88312.88, samples=20 00:19:56.706 iops : min= 54, max= 1189, avg=581.95, stdev=344.92, samples=20 00:19:56.706 lat (msec) : 4=0.14%, 10=1.55%, 20=4.55%, 50=13.55%, 100=51.27% 00:19:56.706 lat (msec) : 250=21.36%, 500=4.11%, 750=2.84%, 1000=0.63% 00:19:56.706 cpu : usr=0.39%, sys=1.76%, ctx=1468, majf=0, minf=4097 00:19:56.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:56.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.706 issued rwts: total=5884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.706 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.706 job2: (groupid=0, jobs=1): err= 0: pid=542410: Wed May 15 06:57:09 2024 00:19:56.706 read: IOPS=505, BW=126MiB/s (133MB/s)(1271MiB/10052msec) 00:19:56.706 slat (usec): min=10, max=155633, avg=1061.75, stdev=5653.12 00:19:56.706 clat (msec): min=4, max=728, avg=125.36, stdev=79.03 00:19:56.706 lat (msec): min=4, max=728, avg=126.42, stdev=79.58 00:19:56.706 clat percentiles (msec): 00:19:56.706 | 1.00th=[ 14], 5.00th=[ 31], 10.00th=[ 48], 20.00th=[ 71], 00:19:56.706 | 30.00th=[ 88], 40.00th=[ 99], 50.00th=[ 115], 60.00th=[ 136], 00:19:56.706 | 70.00th=[ 153], 80.00th=[ 174], 90.00th=[ 199], 95.00th=[ 220], 00:19:56.706 | 99.00th=[ 506], 99.50th=[ 659], 99.90th=[ 726], 99.95th=[ 726], 00:19:56.706 | 99.99th=[ 726] 00:19:56.706 bw ( KiB/s): min=48224, max=203264, per=8.73%, avg=128556.55, stdev=44789.31, samples=20 00:19:56.706 iops : min= 188, max= 794, avg=502.15, stdev=175.00, samples=20 00:19:56.706 lat (msec) : 10=0.75%, 20=2.03%, 50=7.96%, 100=30.95%, 250=55.91% 00:19:56.706 lat (msec) : 500=1.38%, 750=1.02% 00:19:56.706 cpu : usr=0.23%, sys=1.60%, ctx=1384, majf=0, minf=4097 00:19:56.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:56.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.706 issued rwts: total=5085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.706 latency : target=0, window=0, 
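Each per-job stanza has the same anatomy: slat is submission latency, clat completion latency, and lat their sum; the bw line reports per-sample bandwidth in KiB/s, with per= the job's share of the group total; the "IO depths" distribution shows how deep the queue was at submit time. As a cross-check, summing the per-job mean bandwidths should land near the group figure reported at the end of the run. A sketch, assuming the read-phase output was saved to a file (fio-read.log is a hypothetical name):

# sum the avg= fields of all "bw (" lines and report MiB/s
awk -F'avg=' '/bw \(/ { split($2, a, ","); s += a[1] }
              END { printf "%.0f MiB/s\n", s / 1024 }' fio-read.log

On the eleven read jobs here this gives roughly 1442 MiB/s, within rounding of the 1439 MiB/s aggregate in the run status line further down.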
percentile=100.00%, depth=64 00:19:56.706 job3: (groupid=0, jobs=1): err= 0: pid=542411: Wed May 15 06:57:09 2024 00:19:56.706 read: IOPS=449, BW=112MiB/s (118MB/s)(1141MiB/10144msec) 00:19:56.706 slat (usec): min=10, max=201427, avg=1005.33, stdev=8598.15 00:19:56.706 clat (usec): min=1203, max=817534, avg=141183.60, stdev=129881.92 00:19:56.706 lat (usec): min=1260, max=817584, avg=142188.93, stdev=131017.70 00:19:56.706 clat percentiles (msec): 00:19:56.706 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 21], 20.00th=[ 36], 00:19:56.706 | 30.00th=[ 50], 40.00th=[ 82], 50.00th=[ 123], 60.00th=[ 159], 00:19:56.706 | 70.00th=[ 180], 80.00th=[ 207], 90.00th=[ 257], 95.00th=[ 430], 00:19:56.706 | 99.00th=[ 726], 99.50th=[ 760], 99.90th=[ 818], 99.95th=[ 818], 00:19:56.706 | 99.99th=[ 818] 00:19:56.706 bw ( KiB/s): min=29696, max=244736, per=7.82%, avg=115159.90, stdev=69467.48, samples=20 00:19:56.706 iops : min= 116, max= 956, avg=449.80, stdev=271.39, samples=20 00:19:56.706 lat (msec) : 2=0.04%, 4=0.37%, 10=1.93%, 20=6.90%, 50=21.02% 00:19:56.706 lat (msec) : 100=14.73%, 250=43.86%, 500=8.75%, 750=1.78%, 1000=0.61% 00:19:56.706 cpu : usr=0.24%, sys=1.31%, ctx=1361, majf=0, minf=4097 00:19:56.706 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:56.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.706 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.706 issued rwts: total=4562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.706 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.706 job4: (groupid=0, jobs=1): err= 0: pid=542412: Wed May 15 06:57:09 2024 00:19:56.706 read: IOPS=328, BW=82.0MiB/s (86.0MB/s)(831MiB/10136msec) 00:19:56.706 slat (usec): min=10, max=455577, avg=2106.91, stdev=11098.68 00:19:56.706 clat (msec): min=8, max=944, avg=192.84, stdev=151.57 00:19:56.706 lat (msec): min=11, max=944, avg=194.95, stdev=151.98 00:19:56.706 clat percentiles (msec): 00:19:56.706 | 1.00th=[ 29], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 85], 00:19:56.706 | 30.00th=[ 104], 40.00th=[ 128], 50.00th=[ 161], 60.00th=[ 180], 00:19:56.707 | 70.00th=[ 203], 80.00th=[ 232], 90.00th=[ 451], 95.00th=[ 535], 00:19:56.707 | 99.00th=[ 743], 99.50th=[ 802], 99.90th=[ 894], 99.95th=[ 911], 00:19:56.707 | 99.99th=[ 944] 00:19:56.707 bw ( KiB/s): min= 6144, max=208384, per=5.67%, avg=83485.95, stdev=50195.37, samples=20 00:19:56.707 iops : min= 24, max= 814, avg=326.05, stdev=196.03, samples=20 00:19:56.707 lat (msec) : 10=0.03%, 20=0.36%, 50=5.29%, 100=22.02%, 250=54.98% 00:19:56.707 lat (msec) : 500=10.32%, 750=6.29%, 1000=0.72% 00:19:56.707 cpu : usr=0.18%, sys=1.27%, ctx=861, majf=0, minf=4097 00:19:56.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:56.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.707 issued rwts: total=3325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.707 job5: (groupid=0, jobs=1): err= 0: pid=542413: Wed May 15 06:57:09 2024 00:19:56.707 read: IOPS=562, BW=141MiB/s (148MB/s)(1419MiB/10084msec) 00:19:56.707 slat (usec): min=9, max=174909, avg=1251.39, stdev=5056.05 00:19:56.707 clat (msec): min=2, max=663, avg=112.36, stdev=68.68 00:19:56.707 lat (msec): min=2, max=663, avg=113.61, stdev=69.13 00:19:56.707 clat percentiles (msec): 00:19:56.707 | 1.00th=[ 9], 
5.00th=[ 38], 10.00th=[ 52], 20.00th=[ 64], 00:19:56.707 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 114], 00:19:56.707 | 70.00th=[ 134], 80.00th=[ 159], 90.00th=[ 194], 95.00th=[ 211], 00:19:56.707 | 99.00th=[ 380], 99.50th=[ 523], 99.90th=[ 642], 99.95th=[ 642], 00:19:56.707 | 99.99th=[ 667] 00:19:56.707 bw ( KiB/s): min=57856, max=265728, per=9.75%, avg=143656.95, stdev=51475.40, samples=20 00:19:56.707 iops : min= 226, max= 1038, avg=561.15, stdev=201.08, samples=20 00:19:56.707 lat (msec) : 4=0.16%, 10=1.04%, 20=0.90%, 50=6.71%, 100=44.33% 00:19:56.707 lat (msec) : 250=44.59%, 500=1.62%, 750=0.65% 00:19:56.707 cpu : usr=0.32%, sys=1.98%, ctx=1420, majf=0, minf=4097 00:19:56.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:56.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.707 issued rwts: total=5676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.707 job6: (groupid=0, jobs=1): err= 0: pid=542414: Wed May 15 06:57:09 2024 00:19:56.707 read: IOPS=647, BW=162MiB/s (170MB/s)(1631MiB/10081msec) 00:19:56.707 slat (usec): min=9, max=128210, avg=1227.15, stdev=5374.37 00:19:56.707 clat (msec): min=2, max=505, avg=97.58, stdev=69.74 00:19:56.707 lat (msec): min=2, max=505, avg=98.81, stdev=70.37 00:19:56.707 clat percentiles (msec): 00:19:56.707 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 34], 20.00th=[ 44], 00:19:56.707 | 30.00th=[ 56], 40.00th=[ 66], 50.00th=[ 78], 60.00th=[ 94], 00:19:56.707 | 70.00th=[ 120], 80.00th=[ 153], 90.00th=[ 192], 95.00th=[ 215], 00:19:56.707 | 99.00th=[ 300], 99.50th=[ 464], 99.90th=[ 498], 99.95th=[ 506], 00:19:56.707 | 99.99th=[ 506] 00:19:56.707 bw ( KiB/s): min=73069, max=289792, per=11.23%, avg=165394.25, stdev=69198.74, samples=20 00:19:56.707 iops : min= 285, max= 1132, avg=646.05, stdev=270.34, samples=20 00:19:56.707 lat (msec) : 4=0.61%, 10=2.41%, 20=3.17%, 50=18.68%, 100=38.99% 00:19:56.707 lat (msec) : 250=34.41%, 500=1.66%, 750=0.06% 00:19:56.707 cpu : usr=0.29%, sys=2.17%, ctx=1558, majf=0, minf=4097 00:19:56.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:56.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.707 issued rwts: total=6524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.707 job7: (groupid=0, jobs=1): err= 0: pid=542415: Wed May 15 06:57:09 2024 00:19:56.707 read: IOPS=437, BW=109MiB/s (115MB/s)(1110MiB/10140msec) 00:19:56.707 slat (usec): min=9, max=426498, avg=1096.73, stdev=11560.50 00:19:56.707 clat (msec): min=2, max=1165, avg=145.00, stdev=127.44 00:19:56.707 lat (msec): min=2, max=1165, avg=146.10, stdev=128.63 00:19:56.707 clat percentiles (msec): 00:19:56.707 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 24], 20.00th=[ 50], 00:19:56.707 | 30.00th=[ 81], 40.00th=[ 106], 50.00th=[ 130], 60.00th=[ 157], 00:19:56.707 | 70.00th=[ 171], 80.00th=[ 192], 90.00th=[ 224], 95.00th=[ 414], 00:19:56.707 | 99.00th=[ 743], 99.50th=[ 802], 99.90th=[ 835], 99.95th=[ 835], 00:19:56.707 | 99.99th=[ 1167] 00:19:56.707 bw ( KiB/s): min=31744, max=276480, per=7.60%, avg=111987.20, stdev=59100.99, samples=20 00:19:56.707 iops : min= 124, max= 1080, avg=437.45, stdev=230.86, samples=20 00:19:56.707 lat 
(msec) : 4=0.23%, 10=1.44%, 20=5.39%, 50=13.43%, 100=17.37% 00:19:56.707 lat (msec) : 250=54.44%, 500=4.33%, 750=2.41%, 1000=0.95%, 2000=0.02% 00:19:56.707 cpu : usr=0.30%, sys=1.31%, ctx=1502, majf=0, minf=4097 00:19:56.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:56.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.707 issued rwts: total=4438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.707 job8: (groupid=0, jobs=1): err= 0: pid=542416: Wed May 15 06:57:09 2024 00:19:56.707 read: IOPS=943, BW=236MiB/s (247MB/s)(2370MiB/10051msec) 00:19:56.707 slat (usec): min=12, max=117151, avg=832.00, stdev=3045.38 00:19:56.707 clat (msec): min=3, max=645, avg=66.97, stdev=55.80 00:19:56.707 lat (msec): min=3, max=717, avg=67.80, stdev=56.21 00:19:56.707 clat percentiles (msec): 00:19:56.707 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 35], 20.00th=[ 41], 00:19:56.707 | 30.00th=[ 43], 40.00th=[ 45], 50.00th=[ 52], 60.00th=[ 61], 00:19:56.707 | 70.00th=[ 69], 80.00th=[ 82], 90.00th=[ 125], 95.00th=[ 171], 00:19:56.707 | 99.00th=[ 228], 99.50th=[ 284], 99.90th=[ 600], 99.95th=[ 642], 00:19:56.707 | 99.99th=[ 642] 00:19:56.707 bw ( KiB/s): min=91648, max=379392, per=16.36%, avg=240955.30, stdev=90562.48, samples=20 00:19:56.707 iops : min= 358, max= 1482, avg=941.20, stdev=353.72, samples=20 00:19:56.707 lat (msec) : 4=0.03%, 10=2.82%, 20=4.42%, 50=41.86%, 100=36.82% 00:19:56.707 lat (msec) : 250=13.43%, 500=0.13%, 750=0.50% 00:19:56.707 cpu : usr=0.54%, sys=3.19%, ctx=2163, majf=0, minf=3723 00:19:56.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:56.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.707 issued rwts: total=9479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.707 job9: (groupid=0, jobs=1): err= 0: pid=542417: Wed May 15 06:57:09 2024 00:19:56.707 read: IOPS=350, BW=87.6MiB/s (91.8MB/s)(889MiB/10148msec) 00:19:56.707 slat (usec): min=9, max=278142, avg=2011.87, stdev=12295.05 00:19:56.707 clat (msec): min=3, max=883, avg=180.57, stdev=164.98 00:19:56.707 lat (msec): min=3, max=897, avg=182.58, stdev=167.69 00:19:56.707 clat percentiles (msec): 00:19:56.707 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 62], 00:19:56.707 | 30.00th=[ 82], 40.00th=[ 108], 50.00th=[ 146], 60.00th=[ 169], 00:19:56.707 | 70.00th=[ 190], 80.00th=[ 236], 90.00th=[ 439], 95.00th=[ 567], 00:19:56.707 | 99.00th=[ 768], 99.50th=[ 835], 99.90th=[ 885], 99.95th=[ 885], 00:19:56.707 | 99.99th=[ 885] 00:19:56.707 bw ( KiB/s): min=15872, max=193660, per=6.06%, avg=89338.05, stdev=60497.76, samples=20 00:19:56.707 iops : min= 62, max= 756, avg=348.95, stdev=236.27, samples=20 00:19:56.707 lat (msec) : 4=0.08%, 10=2.14%, 20=4.42%, 50=10.21%, 100=21.75% 00:19:56.707 lat (msec) : 250=42.94%, 500=11.90%, 750=5.49%, 1000=1.07% 00:19:56.707 cpu : usr=0.23%, sys=1.22%, ctx=1097, majf=0, minf=4097 00:19:56.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:56.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.707 issued rwts: 
total=3554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.707 job10: (groupid=0, jobs=1): err= 0: pid=542418: Wed May 15 06:57:09 2024 00:19:56.707 read: IOPS=497, BW=124MiB/s (130MB/s)(1254MiB/10085msec) 00:19:56.707 slat (usec): min=9, max=93271, avg=1408.85, stdev=4994.15 00:19:56.707 clat (msec): min=4, max=272, avg=127.16, stdev=56.90 00:19:56.707 lat (msec): min=4, max=272, avg=128.56, stdev=57.35 00:19:56.707 clat percentiles (msec): 00:19:56.707 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 57], 20.00th=[ 79], 00:19:56.707 | 30.00th=[ 91], 40.00th=[ 102], 50.00th=[ 120], 60.00th=[ 138], 00:19:56.707 | 70.00th=[ 161], 80.00th=[ 184], 90.00th=[ 209], 95.00th=[ 228], 00:19:56.707 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 266], 99.95th=[ 266], 00:19:56.707 | 99.99th=[ 275] 00:19:56.707 bw ( KiB/s): min=75776, max=200080, per=8.61%, avg=126803.90, stdev=32360.72, samples=20 00:19:56.707 iops : min= 296, max= 781, avg=495.25, stdev=126.34, samples=20 00:19:56.707 lat (msec) : 10=0.22%, 20=0.52%, 50=6.76%, 100=31.53%, 250=60.04% 00:19:56.707 lat (msec) : 500=0.94% 00:19:56.707 cpu : usr=0.21%, sys=1.90%, ctx=1325, majf=0, minf=4097 00:19:56.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:56.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:56.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:56.707 issued rwts: total=5017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:56.707 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:56.707 00:19:56.707 Run status group 0 (all jobs): 00:19:56.707 READ: bw=1439MiB/s (1508MB/s), 82.0MiB/s-236MiB/s (86.0MB/s-247MB/s), io=14.3GiB (15.3GB), run=10051-10148msec 00:19:56.707 00:19:56.707 Disk stats (read/write): 00:19:56.707 nvme0n1: ios=9522/0, merge=0/0, ticks=1236240/0, in_queue=1236240, util=97.25% 00:19:56.707 nvme10n1: ios=11543/0, merge=0/0, ticks=1241875/0, in_queue=1241875, util=97.47% 00:19:56.707 nvme1n1: ios=9934/0, merge=0/0, ticks=1244969/0, in_queue=1244969, util=97.71% 00:19:56.707 nvme2n1: ios=8984/0, merge=0/0, ticks=1231059/0, in_queue=1231059, util=97.88% 00:19:56.707 nvme3n1: ios=6473/0, merge=0/0, ticks=1224968/0, in_queue=1224968, util=97.94% 00:19:56.707 nvme4n1: ios=11181/0, merge=0/0, ticks=1237104/0, in_queue=1237104, util=98.27% 00:19:56.707 nvme5n1: ios=12836/0, merge=0/0, ticks=1231703/0, in_queue=1231703, util=98.42% 00:19:56.707 nvme6n1: ios=8651/0, merge=0/0, ticks=1239837/0, in_queue=1239837, util=98.52% 00:19:56.707 nvme7n1: ios=18741/0, merge=0/0, ticks=1238098/0, in_queue=1238098, util=98.93% 00:19:56.707 nvme8n1: ios=6925/0, merge=0/0, ticks=1217198/0, in_queue=1217198, util=99.10% 00:19:56.707 nvme9n1: ios=9853/0, merge=0/0, ticks=1237182/0, in_queue=1237182, util=99.20% 00:19:56.707 06:57:09 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:56.707 [global] 00:19:56.707 thread=1 00:19:56.707 invalidate=1 00:19:56.708 rw=randwrite 00:19:56.708 time_based=1 00:19:56.708 runtime=10 00:19:56.708 ioengine=libaio 00:19:56.708 direct=1 00:19:56.708 bs=262144 00:19:56.708 iodepth=64 00:19:56.708 norandommap=1 00:19:56.708 numjobs=1 00:19:56.708 00:19:56.708 [job0] 00:19:56.708 filename=/dev/nvme0n1 00:19:56.708 [job1] 00:19:56.708 filename=/dev/nvme10n1 00:19:56.708 [job2] 00:19:56.708 filename=/dev/nvme1n1 00:19:56.708 [job3] 00:19:56.708 
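The second fio pass now being configured differs from the first only in its -t flag: rw=randwrite replaces rw=read in an otherwise identical job file, and norandommap=1 means fio does not track which blocks it has already written, so offsets may repeat. Continuing the sketch above:

# second pass: same job file, random writes instead of sequential reads
sed -i 's/^rw=read$/rw=randwrite/' multiconnection.fio
fio multiconnection.fio

The "0 zone resets" notes in the write results apply to zoned block devices; these malloc-backed namespaces are conventional, so the counter stays at zero.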
filename=/dev/nvme2n1 00:19:56.708 [job4] 00:19:56.708 filename=/dev/nvme3n1 00:19:56.708 [job5] 00:19:56.708 filename=/dev/nvme4n1 00:19:56.708 [job6] 00:19:56.708 filename=/dev/nvme5n1 00:19:56.708 [job7] 00:19:56.708 filename=/dev/nvme6n1 00:19:56.708 [job8] 00:19:56.708 filename=/dev/nvme7n1 00:19:56.708 [job9] 00:19:56.708 filename=/dev/nvme8n1 00:19:56.708 [job10] 00:19:56.708 filename=/dev/nvme9n1 00:19:56.708 Could not set queue depth (nvme0n1) 00:19:56.708 Could not set queue depth (nvme10n1) 00:19:56.708 Could not set queue depth (nvme1n1) 00:19:56.708 Could not set queue depth (nvme2n1) 00:19:56.708 Could not set queue depth (nvme3n1) 00:19:56.708 Could not set queue depth (nvme4n1) 00:19:56.708 Could not set queue depth (nvme5n1) 00:19:56.708 Could not set queue depth (nvme6n1) 00:19:56.708 Could not set queue depth (nvme7n1) 00:19:56.708 Could not set queue depth (nvme8n1) 00:19:56.708 Could not set queue depth (nvme9n1) 00:19:56.708 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:56.708 fio-3.35 00:19:56.708 Starting 11 threads 00:20:06.682 00:20:06.682 job0: (groupid=0, jobs=1): err= 0: pid=544072: Wed May 15 06:57:20 2024 00:20:06.682 write: IOPS=515, BW=129MiB/s (135MB/s)(1299MiB/10074msec); 0 zone resets 00:20:06.682 slat (usec): min=18, max=28552, avg=1348.98, stdev=3162.75 00:20:06.682 clat (msec): min=5, max=656, avg=122.62, stdev=64.39 00:20:06.682 lat (msec): min=6, max=657, avg=123.97, stdev=64.70 00:20:06.682 clat percentiles (msec): 00:20:06.682 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 38], 20.00th=[ 79], 00:20:06.682 | 30.00th=[ 100], 40.00th=[ 115], 50.00th=[ 136], 60.00th=[ 144], 00:20:06.682 | 70.00th=[ 148], 80.00th=[ 153], 90.00th=[ 169], 95.00th=[ 188], 00:20:06.682 | 99.00th=[ 372], 99.50th=[ 435], 99.90th=[ 634], 99.95th=[ 651], 00:20:06.682 | 99.99th=[ 659] 00:20:06.682 bw ( KiB/s): min=88064, max=200192, per=14.82%, avg=131441.65, stdev=30387.68, samples=20 00:20:06.682 iops : min= 344, max= 782, avg=513.40, stdev=118.73, samples=20 00:20:06.682 lat (msec) : 10=4.43%, 20=1.98%, 50=6.81%, 100=17.03%, 250=66.33% 00:20:06.682 lat (msec) : 500=3.29%, 750=0.13% 00:20:06.682 cpu : usr=1.45%, sys=1.47%, ctx=2750, majf=0, minf=1 
00:20:06.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:20:06.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.682 issued rwts: total=0,5197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.682 job1: (groupid=0, jobs=1): err= 0: pid=544083: Wed May 15 06:57:20 2024 00:20:06.682 write: IOPS=452, BW=113MiB/s (119MB/s)(1156MiB/10209msec); 0 zone resets 00:20:06.682 slat (usec): min=19, max=90300, avg=1843.29, stdev=4371.63 00:20:06.682 clat (msec): min=2, max=416, avg=139.32, stdev=53.75 00:20:06.682 lat (msec): min=2, max=416, avg=141.17, stdev=54.15 00:20:06.682 clat percentiles (msec): 00:20:06.682 | 1.00th=[ 19], 5.00th=[ 54], 10.00th=[ 71], 20.00th=[ 114], 00:20:06.682 | 30.00th=[ 130], 40.00th=[ 138], 50.00th=[ 144], 60.00th=[ 148], 00:20:06.682 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 178], 95.00th=[ 213], 00:20:06.682 | 99.00th=[ 368], 99.50th=[ 384], 99.90th=[ 409], 99.95th=[ 409], 00:20:06.682 | 99.99th=[ 418] 00:20:06.682 bw ( KiB/s): min=47104, max=203264, per=13.17%, avg=116772.45, stdev=30562.29, samples=20 00:20:06.682 iops : min= 184, max= 794, avg=456.10, stdev=119.40, samples=20 00:20:06.682 lat (msec) : 4=0.06%, 10=0.32%, 20=0.76%, 50=3.16%, 100=13.75% 00:20:06.682 lat (msec) : 250=78.83%, 500=3.11% 00:20:06.682 cpu : usr=1.28%, sys=1.58%, ctx=1876, majf=0, minf=1 00:20:06.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:20:06.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.682 issued rwts: total=0,4624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.682 job2: (groupid=0, jobs=1): err= 0: pid=544087: Wed May 15 06:57:20 2024 00:20:06.682 write: IOPS=244, BW=61.2MiB/s (64.2MB/s)(627MiB/10230msec); 0 zone resets 00:20:06.682 slat (usec): min=21, max=506074, avg=3213.07, stdev=17039.79 00:20:06.682 clat (msec): min=2, max=1247, avg=257.89, stdev=201.59 00:20:06.682 lat (msec): min=3, max=1266, avg=261.10, stdev=203.30 00:20:06.682 clat percentiles (msec): 00:20:06.682 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 33], 20.00th=[ 44], 00:20:06.682 | 30.00th=[ 100], 40.00th=[ 155], 50.00th=[ 251], 60.00th=[ 296], 00:20:06.682 | 70.00th=[ 359], 80.00th=[ 439], 90.00th=[ 510], 95.00th=[ 609], 00:20:06.682 | 99.00th=[ 751], 99.50th=[ 802], 99.90th=[ 1250], 99.95th=[ 1250], 00:20:06.682 | 99.99th=[ 1250] 00:20:06.682 bw ( KiB/s): min=22528, max=148992, per=7.05%, avg=62526.15, stdev=32669.03, samples=20 00:20:06.682 iops : min= 88, max= 582, avg=244.20, stdev=127.55, samples=20 00:20:06.682 lat (msec) : 4=0.28%, 10=1.80%, 20=3.15%, 50=16.64%, 100=8.22% 00:20:06.682 lat (msec) : 250=19.51%, 500=38.91%, 750=10.38%, 1000=0.64%, 2000=0.48% 00:20:06.682 cpu : usr=0.71%, sys=0.80%, ctx=1310, majf=0, minf=1 00:20:06.682 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:20:06.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.682 issued rwts: total=0,2506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.682 job3: (groupid=0, jobs=1): err= 0: 
pid=544088: Wed May 15 06:57:20 2024 00:20:06.682 write: IOPS=319, BW=79.9MiB/s (83.8MB/s)(808MiB/10113msec); 0 zone resets 00:20:06.682 slat (usec): min=20, max=214041, avg=2389.91, stdev=9497.85 00:20:06.682 clat (msec): min=3, max=785, avg=197.85, stdev=128.07 00:20:06.682 lat (msec): min=3, max=785, avg=200.24, stdev=129.26 00:20:06.682 clat percentiles (msec): 00:20:06.682 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 44], 20.00th=[ 110], 00:20:06.682 | 30.00th=[ 144], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 180], 00:20:06.682 | 70.00th=[ 222], 80.00th=[ 313], 90.00th=[ 368], 95.00th=[ 439], 00:20:06.682 | 99.00th=[ 609], 99.50th=[ 634], 99.90th=[ 676], 99.95th=[ 785], 00:20:06.682 | 99.99th=[ 785] 00:20:06.682 bw ( KiB/s): min=35840, max=128512, per=9.14%, avg=81080.75, stdev=28434.55, samples=20 00:20:06.682 iops : min= 140, max= 502, avg=316.70, stdev=111.09, samples=20 00:20:06.682 lat (msec) : 4=0.03%, 10=0.77%, 20=3.71%, 50=6.34%, 100=6.65% 00:20:06.682 lat (msec) : 250=56.05%, 500=23.34%, 750=3.00%, 1000=0.09% 00:20:06.682 cpu : usr=0.93%, sys=0.91%, ctx=1604, majf=0, minf=1 00:20:06.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:20:06.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.682 issued rwts: total=0,3231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.682 job4: (groupid=0, jobs=1): err= 0: pid=544089: Wed May 15 06:57:20 2024 00:20:06.682 write: IOPS=250, BW=62.6MiB/s (65.6MB/s)(635MiB/10139msec); 0 zone resets 00:20:06.682 slat (usec): min=22, max=312864, avg=3706.82, stdev=12392.57 00:20:06.682 clat (msec): min=7, max=852, avg=251.75, stdev=168.63 00:20:06.682 lat (msec): min=7, max=852, avg=255.46, stdev=170.70 00:20:06.682 clat percentiles (msec): 00:20:06.682 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 66], 20.00th=[ 84], 00:20:06.682 | 30.00th=[ 120], 40.00th=[ 165], 50.00th=[ 251], 60.00th=[ 292], 00:20:06.682 | 70.00th=[ 338], 80.00th=[ 380], 90.00th=[ 451], 95.00th=[ 567], 00:20:06.682 | 99.00th=[ 776], 99.50th=[ 793], 99.90th=[ 852], 99.95th=[ 852], 00:20:06.682 | 99.99th=[ 852] 00:20:06.682 bw ( KiB/s): min=10240, max=189952, per=7.15%, avg=63385.60, stdev=43147.52, samples=20 00:20:06.682 iops : min= 40, max= 742, avg=247.60, stdev=168.54, samples=20 00:20:06.682 lat (msec) : 10=0.28%, 20=1.46%, 50=5.91%, 100=14.18%, 250=27.92% 00:20:06.682 lat (msec) : 500=43.13%, 750=5.12%, 1000=2.01% 00:20:06.682 cpu : usr=0.89%, sys=0.65%, ctx=974, majf=0, minf=1 00:20:06.682 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:20:06.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.682 issued rwts: total=0,2539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.682 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.682 job5: (groupid=0, jobs=1): err= 0: pid=544091: Wed May 15 06:57:20 2024 00:20:06.682 write: IOPS=173, BW=43.3MiB/s (45.4MB/s)(440MiB/10149msec); 0 zone resets 00:20:06.682 slat (usec): min=26, max=469770, avg=4741.00, stdev=16742.22 00:20:06.683 clat (msec): min=27, max=900, avg=363.74, stdev=138.27 00:20:06.683 lat (msec): min=27, max=900, avg=368.48, stdev=139.57 00:20:06.683 clat percentiles (msec): 00:20:06.683 | 1.00th=[ 52], 5.00th=[ 186], 10.00th=[ 249], 20.00th=[ 284], 00:20:06.683 | 
30.00th=[ 300], 40.00th=[ 321], 50.00th=[ 342], 60.00th=[ 363], 00:20:06.683 | 70.00th=[ 397], 80.00th=[ 443], 90.00th=[ 518], 95.00th=[ 693], 00:20:06.683 | 99.00th=[ 793], 99.50th=[ 885], 99.90th=[ 902], 99.95th=[ 902], 00:20:06.683 | 99.99th=[ 902] 00:20:06.683 bw ( KiB/s): min=10240, max=82432, per=4.90%, avg=43417.60, stdev=15207.08, samples=20 00:20:06.683 iops : min= 40, max= 322, avg=169.60, stdev=59.40, samples=20 00:20:06.683 lat (msec) : 50=0.91%, 100=2.56%, 250=6.77%, 500=79.08%, 750=7.28% 00:20:06.683 lat (msec) : 1000=3.41% 00:20:06.683 cpu : usr=0.42%, sys=0.71%, ctx=735, majf=0, minf=1 00:20:06.683 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:20:06.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.683 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.683 issued rwts: total=0,1759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.683 job6: (groupid=0, jobs=1): err= 0: pid=544093: Wed May 15 06:57:20 2024 00:20:06.683 write: IOPS=409, BW=102MiB/s (107MB/s)(1048MiB/10234msec); 0 zone resets 00:20:06.683 slat (usec): min=16, max=276688, avg=2088.97, stdev=6048.11 00:20:06.683 clat (msec): min=3, max=745, avg=154.09, stdev=75.32 00:20:06.683 lat (msec): min=3, max=745, avg=156.18, stdev=76.09 00:20:06.683 clat percentiles (msec): 00:20:06.683 | 1.00th=[ 12], 5.00th=[ 46], 10.00th=[ 104], 20.00th=[ 124], 00:20:06.683 | 30.00th=[ 136], 40.00th=[ 142], 50.00th=[ 146], 60.00th=[ 150], 00:20:06.683 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 194], 95.00th=[ 317], 00:20:06.683 | 99.00th=[ 468], 99.50th=[ 506], 99.90th=[ 726], 99.95th=[ 726], 00:20:06.683 | 99.99th=[ 743] 00:20:06.683 bw ( KiB/s): min=29184, max=163840, per=11.92%, avg=105681.90, stdev=30105.85, samples=20 00:20:06.683 iops : min= 114, max= 640, avg=412.80, stdev=117.64, samples=20 00:20:06.683 lat (msec) : 4=0.05%, 10=0.67%, 20=1.48%, 50=3.22%, 100=4.10% 00:20:06.683 lat (msec) : 250=83.66%, 500=6.30%, 750=0.52% 00:20:06.683 cpu : usr=1.05%, sys=1.15%, ctx=1634, majf=0, minf=1 00:20:06.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:06.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.683 issued rwts: total=0,4191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.683 job7: (groupid=0, jobs=1): err= 0: pid=544094: Wed May 15 06:57:20 2024 00:20:06.683 write: IOPS=414, BW=104MiB/s (109MB/s)(1059MiB/10228msec); 0 zone resets 00:20:06.683 slat (usec): min=19, max=538280, avg=1803.81, stdev=10776.83 00:20:06.683 clat (msec): min=2, max=968, avg=152.61, stdev=169.38 00:20:06.683 lat (msec): min=3, max=968, avg=154.41, stdev=171.29 00:20:06.683 clat percentiles (msec): 00:20:06.683 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 17], 20.00th=[ 52], 00:20:06.683 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 96], 60.00th=[ 114], 00:20:06.683 | 70.00th=[ 133], 80.00th=[ 188], 90.00th=[ 414], 95.00th=[ 542], 00:20:06.683 | 99.00th=[ 877], 99.50th=[ 919], 99.90th=[ 969], 99.95th=[ 969], 00:20:06.683 | 99.99th=[ 969] 00:20:06.683 bw ( KiB/s): min= 2048, max=205824, per=12.04%, avg=106803.20, stdev=62308.39, samples=20 00:20:06.683 iops : min= 8, max= 804, avg=417.20, stdev=243.39, samples=20 00:20:06.683 lat (msec) : 4=0.14%, 10=2.72%, 20=9.40%, 50=7.70%, 100=32.00% 
00:20:06.683 lat (msec) : 250=32.00%, 500=9.66%, 750=4.82%, 1000=1.58% 00:20:06.683 cpu : usr=1.04%, sys=1.33%, ctx=2304, majf=0, minf=1 00:20:06.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:06.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.683 issued rwts: total=0,4235,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.683 job8: (groupid=0, jobs=1): err= 0: pid=544097: Wed May 15 06:57:20 2024 00:20:06.683 write: IOPS=236, BW=59.2MiB/s (62.1MB/s)(599MiB/10114msec); 0 zone resets 00:20:06.683 slat (usec): min=17, max=469689, avg=3426.83, stdev=14274.93 00:20:06.683 clat (msec): min=4, max=851, avg=266.74, stdev=174.76 00:20:06.683 lat (msec): min=5, max=851, avg=270.17, stdev=176.91 00:20:06.683 clat percentiles (msec): 00:20:06.683 | 1.00th=[ 9], 5.00th=[ 42], 10.00th=[ 59], 20.00th=[ 86], 00:20:06.683 | 30.00th=[ 117], 40.00th=[ 199], 50.00th=[ 292], 60.00th=[ 321], 00:20:06.683 | 70.00th=[ 355], 80.00th=[ 397], 90.00th=[ 447], 95.00th=[ 527], 00:20:06.683 | 99.00th=[ 793], 99.50th=[ 802], 99.90th=[ 852], 99.95th=[ 852], 00:20:06.683 | 99.99th=[ 852] 00:20:06.683 bw ( KiB/s): min=10240, max=172032, per=6.73%, avg=59703.65, stdev=40602.49, samples=20 00:20:06.683 iops : min= 40, max= 672, avg=233.20, stdev=158.61, samples=20 00:20:06.683 lat (msec) : 10=1.17%, 20=0.54%, 50=4.13%, 100=17.87%, 250=20.29% 00:20:06.683 lat (msec) : 500=49.94%, 750=3.51%, 1000=2.55% 00:20:06.683 cpu : usr=0.85%, sys=0.81%, ctx=1247, majf=0, minf=1 00:20:06.683 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:20:06.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.683 issued rwts: total=0,2395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.683 job9: (groupid=0, jobs=1): err= 0: pid=544098: Wed May 15 06:57:20 2024 00:20:06.683 write: IOPS=293, BW=73.4MiB/s (76.9MB/s)(744MiB/10139msec); 0 zone resets 00:20:06.683 slat (usec): min=25, max=467622, avg=2961.33, stdev=14414.39 00:20:06.683 clat (msec): min=2, max=906, avg=214.95, stdev=162.53 00:20:06.683 lat (msec): min=4, max=906, avg=217.91, stdev=164.55 00:20:06.683 clat percentiles (msec): 00:20:06.683 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 44], 20.00th=[ 81], 00:20:06.683 | 30.00th=[ 103], 40.00th=[ 121], 50.00th=[ 171], 60.00th=[ 253], 00:20:06.683 | 70.00th=[ 292], 80.00th=[ 342], 90.00th=[ 409], 95.00th=[ 447], 00:20:06.683 | 99.00th=[ 818], 99.50th=[ 894], 99.90th=[ 911], 99.95th=[ 911], 00:20:06.683 | 99.99th=[ 911] 00:20:06.683 bw ( KiB/s): min= 6144, max=196096, per=8.41%, avg=74572.80, stdev=43295.53, samples=20 00:20:06.683 iops : min= 24, max= 766, avg=291.30, stdev=169.12, samples=20 00:20:06.683 lat (msec) : 4=0.03%, 10=0.50%, 20=2.22%, 50=10.45%, 100=16.03% 00:20:06.683 lat (msec) : 250=30.41%, 500=36.49%, 750=1.85%, 1000=2.02% 00:20:06.683 cpu : usr=0.91%, sys=0.86%, ctx=1267, majf=0, minf=1 00:20:06.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:20:06.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.683 issued rwts: total=0,2976,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:20:06.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.683 job10: (groupid=0, jobs=1): err= 0: pid=544099: Wed May 15 06:57:20 2024 00:20:06.683 write: IOPS=176, BW=44.2MiB/s (46.3MB/s)(451MiB/10209msec); 0 zone resets 00:20:06.683 slat (usec): min=15, max=312966, avg=4145.89, stdev=15011.61 00:20:06.683 clat (msec): min=2, max=852, avg=358.05, stdev=152.30 00:20:06.683 lat (msec): min=4, max=852, avg=362.20, stdev=153.86 00:20:06.683 clat percentiles (msec): 00:20:06.683 | 1.00th=[ 16], 5.00th=[ 86], 10.00th=[ 182], 20.00th=[ 262], 00:20:06.683 | 30.00th=[ 296], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 368], 00:20:06.683 | 70.00th=[ 409], 80.00th=[ 456], 90.00th=[ 542], 95.00th=[ 659], 00:20:06.683 | 99.00th=[ 793], 99.50th=[ 802], 99.90th=[ 852], 99.95th=[ 852], 00:20:06.683 | 99.99th=[ 852] 00:20:06.683 bw ( KiB/s): min=10240, max=64512, per=5.02%, avg=44547.20, stdev=12931.38, samples=20 00:20:06.683 iops : min= 40, max= 252, avg=174.00, stdev=50.53, samples=20 00:20:06.683 lat (msec) : 4=0.06%, 10=0.55%, 20=1.22%, 50=2.33%, 100=1.55% 00:20:06.683 lat (msec) : 250=11.31%, 500=69.83%, 750=9.93%, 1000=3.22% 00:20:06.683 cpu : usr=0.47%, sys=0.51%, ctx=918, majf=0, minf=1 00:20:06.683 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:20:06.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.683 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:06.683 issued rwts: total=0,1803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.683 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:06.683 00:20:06.683 Run status group 0 (all jobs): 00:20:06.683 WRITE: bw=866MiB/s (908MB/s), 43.3MiB/s-129MiB/s (45.4MB/s-135MB/s), io=8864MiB (9295MB), run=10074-10234msec 00:20:06.683 00:20:06.683 Disk stats (read/write): 00:20:06.683 nvme0n1: ios=48/10148, merge=0/0, ticks=1324/1221117, in_queue=1222441, util=99.81% 00:20:06.683 nvme10n1: ios=46/9214, merge=0/0, ticks=1014/1234377, in_queue=1235391, util=100.00% 00:20:06.683 nvme1n1: ios=48/4948, merge=0/0, ticks=10537/1101286, in_queue=1111823, util=100.00% 00:20:06.683 nvme2n1: ios=0/6339, merge=0/0, ticks=0/1201108, in_queue=1201108, util=97.67% 00:20:06.683 nvme3n1: ios=0/4910, merge=0/0, ticks=0/1201377, in_queue=1201377, util=97.74% 00:20:06.683 nvme4n1: ios=47/3342, merge=0/0, ticks=492/1203224, in_queue=1203716, util=100.00% 00:20:06.683 nvme5n1: ios=0/8317, merge=0/0, ticks=0/1230546, in_queue=1230546, util=98.30% 00:20:06.683 nvme6n1: ios=38/8424, merge=0/0, ticks=1616/1228075, in_queue=1229691, util=99.89% 00:20:06.683 nvme7n1: ios=38/4579, merge=0/0, ticks=229/1215958, in_queue=1216187, util=99.99% 00:20:06.683 nvme8n1: ios=42/5802, merge=0/0, ticks=4958/1123254, in_queue=1128212, util=100.00% 00:20:06.683 nvme9n1: ios=0/3580, merge=0/0, ticks=0/1242103, in_queue=1242103, util=99.13% 00:20:06.683 06:57:20 -- target/multiconnection.sh@36 -- # sync 00:20:06.683 06:57:20 -- target/multiconnection.sh@37 -- # seq 1 11 00:20:06.683 06:57:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.683 06:57:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.683 06:57:20 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:20:06.683 06:57:20 -- common/autotest_common.sh@1198 -- # local i=0 00:20:06.683 06:57:20 -- common/autotest_common.sh@1199 -- # lsblk -o 
NAME,SERIAL 00:20:06.683 06:57:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:20:06.683 06:57:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:06.683 06:57:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:20:06.683 06:57:20 -- common/autotest_common.sh@1210 -- # return 0 00:20:06.683 06:57:20 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.683 06:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.683 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:20:06.683 06:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.683 06:57:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.684 06:57:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:20:06.684 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:20:06.684 06:57:20 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:20:06.684 06:57:20 -- common/autotest_common.sh@1198 -- # local i=0 00:20:06.684 06:57:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:06.684 06:57:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:20:06.684 06:57:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:06.684 06:57:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:20:06.684 06:57:20 -- common/autotest_common.sh@1210 -- # return 0 00:20:06.684 06:57:20 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:06.684 06:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.684 06:57:20 -- common/autotest_common.sh@10 -- # set +x 00:20:06.684 06:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.684 06:57:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:06.684 06:57:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:20:06.942 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:20:06.942 06:57:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:20:06.942 06:57:21 -- common/autotest_common.sh@1198 -- # local i=0 00:20:06.942 06:57:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:06.942 06:57:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:20:06.942 06:57:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:06.942 06:57:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:20:07.200 06:57:21 -- common/autotest_common.sh@1210 -- # return 0 00:20:07.200 06:57:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:07.200 06:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.200 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.200 06:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.200 06:57:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.200 06:57:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:20:07.200 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:20:07.200 06:57:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:20:07.200 06:57:21 -- common/autotest_common.sh@1198 -- # local i=0 00:20:07.200 06:57:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:07.200 06:57:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 
00:20:07.200 06:57:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:07.200 06:57:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:20:07.200 06:57:21 -- common/autotest_common.sh@1210 -- # return 0 00:20:07.200 06:57:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:07.200 06:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.200 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.200 06:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.200 06:57:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.200 06:57:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:20:07.458 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:20:07.458 06:57:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:20:07.458 06:57:21 -- common/autotest_common.sh@1198 -- # local i=0 00:20:07.458 06:57:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:07.458 06:57:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:20:07.458 06:57:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:07.458 06:57:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:20:07.458 06:57:21 -- common/autotest_common.sh@1210 -- # return 0 00:20:07.458 06:57:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:20:07.458 06:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.458 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.458 06:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.458 06:57:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.458 06:57:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:20:07.715 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:20:07.715 06:57:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:20:07.715 06:57:21 -- common/autotest_common.sh@1198 -- # local i=0 00:20:07.715 06:57:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:07.715 06:57:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:20:07.715 06:57:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:07.715 06:57:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:20:07.715 06:57:21 -- common/autotest_common.sh@1210 -- # return 0 00:20:07.715 06:57:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:20:07.715 06:57:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.716 06:57:21 -- common/autotest_common.sh@10 -- # set +x 00:20:07.716 06:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.716 06:57:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.716 06:57:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:20:07.973 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:20:07.973 06:57:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:20:07.973 06:57:22 -- common/autotest_common.sh@1198 -- # local i=0 00:20:07.973 06:57:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:07.973 06:57:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:20:07.973 06:57:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:07.973 
06:57:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:20:07.973 06:57:22 -- common/autotest_common.sh@1210 -- # return 0 00:20:07.973 06:57:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:20:07.973 06:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:07.973 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:07.973 06:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:07.973 06:57:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:07.973 06:57:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:20:08.231 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:20:08.231 06:57:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:20:08.231 06:57:22 -- common/autotest_common.sh@1198 -- # local i=0 00:20:08.231 06:57:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:08.231 06:57:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:20:08.231 06:57:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:08.231 06:57:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:20:08.231 06:57:22 -- common/autotest_common.sh@1210 -- # return 0 00:20:08.231 06:57:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:20:08.231 06:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.231 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:08.231 06:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.231 06:57:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:08.231 06:57:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:20:08.231 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:20:08.231 06:57:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:20:08.231 06:57:22 -- common/autotest_common.sh@1198 -- # local i=0 00:20:08.231 06:57:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:08.231 06:57:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:20:08.488 06:57:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:08.488 06:57:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:20:08.488 06:57:22 -- common/autotest_common.sh@1210 -- # return 0 00:20:08.488 06:57:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:20:08.488 06:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.488 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:08.488 06:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.488 06:57:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:08.488 06:57:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:20:08.488 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:20:08.488 06:57:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:20:08.488 06:57:22 -- common/autotest_common.sh@1198 -- # local i=0 00:20:08.488 06:57:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:08.488 06:57:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:20:08.488 06:57:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:08.488 06:57:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:20:08.488 06:57:22 -- 
common/autotest_common.sh@1210 -- # return 0 00:20:08.488 06:57:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:20:08.488 06:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.488 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:08.488 06:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.488 06:57:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:08.488 06:57:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:20:08.488 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:20:08.488 06:57:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:20:08.488 06:57:22 -- common/autotest_common.sh@1198 -- # local i=0 00:20:08.488 06:57:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:08.488 06:57:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:20:08.488 06:57:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:08.488 06:57:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:20:08.488 06:57:22 -- common/autotest_common.sh@1210 -- # return 0 00:20:08.489 06:57:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:20:08.489 06:57:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:08.489 06:57:22 -- common/autotest_common.sh@10 -- # set +x 00:20:08.489 06:57:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:08.489 06:57:22 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:20:08.489 06:57:22 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:08.489 06:57:22 -- target/multiconnection.sh@47 -- # nvmftestfini 00:20:08.489 06:57:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:08.489 06:57:22 -- nvmf/common.sh@116 -- # sync 00:20:08.489 06:57:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:08.489 06:57:22 -- nvmf/common.sh@119 -- # set +e 00:20:08.489 06:57:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:08.489 06:57:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:08.489 rmmod nvme_tcp 00:20:08.489 rmmod nvme_fabrics 00:20:08.489 rmmod nvme_keyring 00:20:08.746 06:57:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:08.746 06:57:22 -- nvmf/common.sh@123 -- # set -e 00:20:08.746 06:57:22 -- nvmf/common.sh@124 -- # return 0 00:20:08.746 06:57:22 -- nvmf/common.sh@477 -- # '[' -n 538042 ']' 00:20:08.746 06:57:22 -- nvmf/common.sh@478 -- # killprocess 538042 00:20:08.746 06:57:22 -- common/autotest_common.sh@926 -- # '[' -z 538042 ']' 00:20:08.746 06:57:22 -- common/autotest_common.sh@930 -- # kill -0 538042 00:20:08.746 06:57:22 -- common/autotest_common.sh@931 -- # uname 00:20:08.746 06:57:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:08.746 06:57:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 538042 00:20:08.746 06:57:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:08.746 06:57:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:08.746 06:57:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 538042' 00:20:08.746 killing process with pid 538042 00:20:08.746 06:57:22 -- common/autotest_common.sh@945 -- # kill 538042 00:20:08.746 06:57:22 -- common/autotest_common.sh@950 -- # wait 538042 00:20:09.311 06:57:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:09.311 06:57:23 -- nvmf/common.sh@483 -- # [[ tcp 
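The eleven near-identical blocks above are one loop in multiconnection.sh: disconnect the initiator from each cnode, poll lsblk until the matching serial disappears, then delete the subsystem over RPC. A minimal sketch of that loop, assuming the rpc_cmd wrapper resolves to scripts/rpc.py and reducing waitforserial_disconnect to the lsblk poll the xtrace shows:

    # teardown sketch: NVMF_SUBSYS=11 in this run
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # waitforserial_disconnect: wait until no block device reports serial SPDK$i
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done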
== \t\c\p ]] 00:20:09.311 06:57:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:09.311 06:57:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.311 06:57:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:09.311 06:57:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.311 06:57:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.311 06:57:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.232 06:57:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:11.232 00:20:11.232 real 1m1.392s 00:20:11.232 user 3m13.789s 00:20:11.232 sys 0m22.286s 00:20:11.232 06:57:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.232 06:57:25 -- common/autotest_common.sh@10 -- # set +x 00:20:11.232 ************************************ 00:20:11.232 END TEST nvmf_multiconnection 00:20:11.232 ************************************ 00:20:11.232 06:57:25 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:11.232 06:57:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:11.232 06:57:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:11.232 06:57:25 -- common/autotest_common.sh@10 -- # set +x 00:20:11.232 ************************************ 00:20:11.232 START TEST nvmf_initiator_timeout 00:20:11.232 ************************************ 00:20:11.232 06:57:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:11.232 * Looking for test storage... 00:20:11.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:11.493 06:57:25 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.493 06:57:25 -- nvmf/common.sh@7 -- # uname -s 00:20:11.493 06:57:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.493 06:57:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.493 06:57:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.493 06:57:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.493 06:57:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.493 06:57:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.493 06:57:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.493 06:57:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.493 06:57:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.493 06:57:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.493 06:57:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.493 06:57:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.493 06:57:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.493 06:57:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.493 06:57:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.493 06:57:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.493 06:57:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.493 06:57:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.493 06:57:25 -- scripts/common.sh@442 -- # source 
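The rmmod output interleaved in the teardown above comes from nvmfcleanup, which retries the unload because nvme-tcp will not unload while controllers are still tearing down. A reduced sketch of that loop (the retry and backoff details are simplified, not copied from common.sh):

    sync
    set +e
    for i in {1..20}; do
        # -v makes modprobe print the rmmod calls seen in the log
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e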
/etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.493 06:57:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...toolchain triplet repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.493 06:57:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same entries, /opt/go prepended...] 00:20:11.493 06:57:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same entries, /opt/protoc prepended...] 00:20:11.493 06:57:25 -- paths/export.sh@5 -- # export PATH 00:20:11.493 06:57:25 -- paths/export.sh@6 -- # echo [...same PATH as @4...] 00:20:11.493 06:57:25 -- nvmf/common.sh@46 -- # : 0 00:20:11.493 06:57:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:11.493 06:57:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:11.493 06:57:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:11.493 06:57:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.493 06:57:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.493 06:57:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:11.493 06:57:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:11.493 06:57:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:11.493 06:57:25 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:11.493 06:57:25 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.493 06:57:25 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:20:11.493 06:57:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:11.493 06:57:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.493 06:57:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:11.493 06:57:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:11.493 06:57:25
-- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:11.493 06:57:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.493 06:57:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.493 06:57:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.493 06:57:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:11.493 06:57:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:11.493 06:57:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:11.493 06:57:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.028 06:57:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:14.028 06:57:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:14.028 06:57:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:14.028 06:57:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:14.028 06:57:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:14.028 06:57:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:14.028 06:57:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:14.028 06:57:27 -- nvmf/common.sh@294 -- # net_devs=() 00:20:14.028 06:57:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:14.028 06:57:27 -- nvmf/common.sh@295 -- # e810=() 00:20:14.028 06:57:27 -- nvmf/common.sh@295 -- # local -ga e810 00:20:14.028 06:57:27 -- nvmf/common.sh@296 -- # x722=() 00:20:14.028 06:57:27 -- nvmf/common.sh@296 -- # local -ga x722 00:20:14.028 06:57:27 -- nvmf/common.sh@297 -- # mlx=() 00:20:14.028 06:57:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:14.028 06:57:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.028 06:57:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:14.028 06:57:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:14.028 06:57:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:14.028 06:57:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:14.028 06:57:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:14.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:14.028 06:57:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@351 -- # [[ tcp 
== rdma ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:14.028 06:57:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:14.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:14.028 06:57:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:14.028 06:57:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:14.028 06:57:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.028 06:57:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:14.028 06:57:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.028 06:57:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:14.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:14.028 06:57:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.028 06:57:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:14.028 06:57:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.028 06:57:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:14.028 06:57:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.028 06:57:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:14.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:14.028 06:57:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.028 06:57:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:14.028 06:57:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:14.028 06:57:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:14.028 06:57:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:14.028 06:57:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.028 06:57:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.028 06:57:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.028 06:57:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:14.028 06:57:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.028 06:57:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.028 06:57:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:14.028 06:57:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.028 06:57:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.028 06:57:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:14.028 06:57:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:14.028 06:57:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.028 06:57:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.028 06:57:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.028 06:57:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.028 06:57:27 -- nvmf/common.sh@257 -- # ip link set 
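The device walk above is gather_supported_nvmf_pci_devs: supported PCI vendor/device pairs are bucketed into the e810/x722/mlx arrays, and each surviving entry is resolved to its kernel interface by globbing sysfs. The E810 path taken here looks roughly like this (pci_bus_cache is assumed to be an associative array populated earlier by scripts/common.sh):

    intel=0x8086
    for pci in ${pci_bus_cache["$intel:0x159b"]}; do       # E810 NICs, e.g. 0000:0a:00.0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip to the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done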
cvl_0_1 up 00:20:14.028 06:57:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.028 06:57:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.028 06:57:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.028 06:57:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:14.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:20:14.028 00:20:14.028 --- 10.0.0.2 ping statistics --- 00:20:14.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.028 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:20:14.028 06:57:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:20:14.028 00:20:14.028 --- 10.0.0.1 ping statistics --- 00:20:14.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.028 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:14.028 06:57:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.029 06:57:27 -- nvmf/common.sh@410 -- # return 0 00:20:14.029 06:57:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:14.029 06:57:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.029 06:57:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:14.029 06:57:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:14.029 06:57:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.029 06:57:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:14.029 06:57:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:14.029 06:57:28 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:20:14.029 06:57:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:14.029 06:57:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:14.029 06:57:28 -- common/autotest_common.sh@10 -- # set +x 00:20:14.029 06:57:28 -- nvmf/common.sh@469 -- # nvmfpid=547627 00:20:14.029 06:57:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:14.029 06:57:28 -- nvmf/common.sh@470 -- # waitforlisten 547627 00:20:14.029 06:57:28 -- common/autotest_common.sh@819 -- # '[' -z 547627 ']' 00:20:14.029 06:57:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.029 06:57:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:14.029 06:57:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.029 06:57:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:14.029 06:57:28 -- common/autotest_common.sh@10 -- # set +x 00:20:14.029 [2024-05-15 06:57:28.048436] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
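nvmf_tcp_init, just completed above, puts one physical port into a private network namespace for the target and leaves the other to the initiator, so NVMe/TCP traffic crosses a real link between the two E810 ports. Stripped to its effective commands (interface names and addresses as logged):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator-to-target check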
00:20:14.029 [2024-05-15 06:57:28.048509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.029 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.029 [2024-05-15 06:57:28.123894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.029 [2024-05-15 06:57:28.233842] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:14.029 [2024-05-15 06:57:28.233998] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.029 [2024-05-15 06:57:28.234032] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.029 [2024-05-15 06:57:28.234045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.029 [2024-05-15 06:57:28.234104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.029 [2024-05-15 06:57:28.234164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.029 [2024-05-15 06:57:28.234213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.029 [2024-05-15 06:57:28.234216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.960 06:57:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.960 06:57:29 -- common/autotest_common.sh@852 -- # return 0 00:20:14.960 06:57:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.960 06:57:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:14.960 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:14.960 06:57:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.960 06:57:29 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:14.960 06:57:29 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.960 06:57:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.960 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:14.960 Malloc0 00:20:14.960 06:57:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.960 06:57:29 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:20:14.960 06:57:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.960 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:14.960 Delay0 00:20:14.960 06:57:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.960 06:57:29 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.960 06:57:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.960 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:14.960 [2024-05-15 06:57:29.092618] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.960 06:57:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.960 06:57:29 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:14.960 06:57:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.960 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:14.960 06:57:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
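The target has now built its stack over RPC: a 64 MiB malloc bdev wrapped in a delay bdev (all four latency axes at 30, which bdev_delay takes as microseconds), a TCP transport, and subsystem cnode1. Replayed by hand the sequence would be roughly (scripts/rpc.py assumed as the transport behind rpc_cmd):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME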
00:20:14.960 06:57:29 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:14.960 06:57:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.960 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:14.960 06:57:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.960 06:57:29 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.960 06:57:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:14.960 06:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:14.960 [2024-05-15 06:57:29.120884] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.960 06:57:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:14.960 06:57:29 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:15.892 06:57:29 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:20:15.892 06:57:29 -- common/autotest_common.sh@1177 -- # local i=0 00:20:15.892 06:57:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:15.892 06:57:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:15.892 06:57:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:17.787 06:57:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:17.787 06:57:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:17.787 06:57:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:17.787 06:57:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:17.787 06:57:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:17.787 06:57:31 -- common/autotest_common.sh@1187 -- # return 0 00:20:17.787 06:57:31 -- target/initiator_timeout.sh@35 -- # fio_pid=548196 00:20:17.787 06:57:31 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:20:17.787 06:57:31 -- target/initiator_timeout.sh@37 -- # sleep 3 00:20:17.787 [global] 00:20:17.787 thread=1 00:20:17.787 invalidate=1 00:20:17.787 rw=write 00:20:17.787 time_based=1 00:20:17.787 runtime=60 00:20:17.787 ioengine=libaio 00:20:17.787 direct=1 00:20:17.787 bs=4096 00:20:17.787 iodepth=1 00:20:17.787 norandommap=0 00:20:17.787 numjobs=1 00:20:17.787 00:20:17.787 verify_dump=1 00:20:17.787 verify_backlog=512 00:20:17.787 verify_state_save=0 00:20:17.787 do_verify=1 00:20:17.787 verify=crc32c-intel 00:20:17.787 [job0] 00:20:17.787 filename=/dev/nvme0n1 00:20:17.787 Could not set queue depth (nvme0n1) 00:20:17.787 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:17.787 fio-3.35 00:20:17.787 Starting 1 thread 00:20:21.060 06:57:34 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:20:21.060 06:57:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.060 06:57:34 -- common/autotest_common.sh@10 -- # set +x 00:20:21.060 true 00:20:21.060 06:57:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.060 06:57:34 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:20:21.060 06:57:34 -- 
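fio-wrapper expanded to the job file printed above. For reproducing the workload outside the harness, an approximately equivalent direct invocation would be (norandommap=0 is fio's default and is omitted):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread --invalidate=1 \
        --time_based --runtime=60 --do_verify=1 --verify=crc32c-intel \
        --verify_dump=1 --verify_backlog=512 --verify_state_save=0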
common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.060 06:57:34 -- common/autotest_common.sh@10 -- # set +x 00:20:21.060 true 00:20:21.060 06:57:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.060 06:57:34 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:20:21.060 06:57:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.060 06:57:34 -- common/autotest_common.sh@10 -- # set +x 00:20:21.060 true 00:20:21.060 06:57:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.060 06:57:34 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:20:21.060 06:57:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:21.060 06:57:34 -- common/autotest_common.sh@10 -- # set +x 00:20:21.060 true 00:20:21.060 06:57:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:21.060 06:57:34 -- target/initiator_timeout.sh@45 -- # sleep 3 00:20:23.583 06:57:37 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:20:23.583 06:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.583 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:20:23.840 true 00:20:23.840 06:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.840 06:57:37 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:20:23.840 06:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.840 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:20:23.840 true 00:20:23.840 06:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.840 06:57:37 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:20:23.840 06:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.840 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:20:23.840 true 00:20:23.840 06:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.840 06:57:37 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:20:23.840 06:57:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:23.840 06:57:37 -- common/autotest_common.sh@10 -- # set +x 00:20:23.840 true 00:20:23.840 06:57:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:23.840 06:57:37 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:20:23.840 06:57:37 -- target/initiator_timeout.sh@54 -- # wait 548196 00:21:20.077 00:21:20.077 job0: (groupid=0, jobs=1): err= 0: pid=548267: Wed May 15 06:58:32 2024 00:21:20.077 read: IOPS=132, BW=531KiB/s (544kB/s)(31.1MiB/60041msec) 00:21:20.077 slat (usec): min=6, max=7432, avg=11.34, stdev=112.73 00:21:20.077 clat (usec): min=478, max=45069, avg=2018.01, stdev=7604.84 00:21:20.077 lat (usec): min=486, max=45092, avg=2029.34, stdev=7608.45 00:21:20.077 clat percentiles (usec): 00:21:20.077 | 1.00th=[ 494], 5.00th=[ 506], 10.00th=[ 510], 20.00th=[ 519], 00:21:20.077 | 30.00th=[ 529], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:21:20.077 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 562], 95.00th=[ 578], 00:21:20.077 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:21:20.077 | 99.99th=[44827] 00:21:20.077 write: IOPS=136, BW=546KiB/s (559kB/s)(32.0MiB/60041msec); 0 zone resets 00:21:20.077 slat (usec): min=7, max=29799, avg=15.90, stdev=329.15 00:21:20.077 clat (usec): min=255, max=41103k, avg=5331.91, stdev=454122.57 00:21:20.077 lat (usec): min=263, 
max=41103k, avg=5347.82, stdev=454122.69 00:21:20.077 clat percentiles (usec): 00:21:20.077 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 00:21:20.077 | 20.00th=[ 285], 30.00th=[ 293], 40.00th=[ 306], 00:21:20.077 | 50.00th=[ 314], 60.00th=[ 322], 70.00th=[ 326], 00:21:20.077 | 80.00th=[ 338], 90.00th=[ 347], 95.00th=[ 363], 00:21:20.077 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 570], 00:21:20.077 | 99.95th=[ 1045], 99.99th=[17112761] 00:21:20.077 bw ( KiB/s): min= 368, max= 5720, per=100.00%, avg=4096.00, stdev=1442.39, samples=16 00:21:20.077 iops : min= 92, max= 1430, avg=1024.00, stdev=360.60, samples=16 00:21:20.077 lat (usec) : 500=51.94%, 750=46.18% 00:21:20.077 lat (msec) : 2=0.07%, 50=1.81%, >=2000=0.01% 00:21:20.077 cpu : usr=0.23%, sys=0.38%, ctx=16170, majf=0, minf=2 00:21:20.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:20.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.077 issued rwts: total=7974,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:20.077 00:21:20.077 Run status group 0 (all jobs): 00:21:20.077 READ: bw=531KiB/s (544kB/s), 531KiB/s-531KiB/s (544kB/s-544kB/s), io=31.1MiB (32.7MB), run=60041-60041msec 00:21:20.077 WRITE: bw=546KiB/s (559kB/s), 546KiB/s-546KiB/s (559kB/s-559kB/s), io=32.0MiB (33.6MB), run=60041-60041msec 00:21:20.077 00:21:20.077 Disk stats (read/write): 00:21:20.077 nvme0n1: ios=8022/8192, merge=0/0, ticks=17187/2541, in_queue=19728, util=99.74% 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:20.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:20.077 06:58:32 -- common/autotest_common.sh@1198 -- # local i=0 00:21:20.077 06:58:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:20.077 06:58:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:20.077 06:58:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:20.077 06:58:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:20.077 06:58:32 -- common/autotest_common.sh@1210 -- # return 0 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:21:20.077 nvmf hotplug test: fio successful as expected 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:20.077 06:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.077 06:58:32 -- common/autotest_common.sh@10 -- # set +x 00:21:20.077 06:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:21:20.077 06:58:32 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:21:20.077 06:58:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:20.077 06:58:32 -- nvmf/common.sh@116 -- # sync 00:21:20.077 06:58:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:20.077 06:58:32 -- nvmf/common.sh@119 -- # set +e 00:21:20.077 06:58:32 -- nvmf/common.sh@120 -- # for 
i in {1..20} 00:21:20.077 06:58:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:20.077 rmmod nvme_tcp 00:21:20.077 rmmod nvme_fabrics 00:21:20.077 rmmod nvme_keyring 00:21:20.077 06:58:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:20.077 06:58:32 -- nvmf/common.sh@123 -- # set -e 00:21:20.077 06:58:32 -- nvmf/common.sh@124 -- # return 0 00:21:20.077 06:58:32 -- nvmf/common.sh@477 -- # '[' -n 547627 ']' 00:21:20.077 06:58:32 -- nvmf/common.sh@478 -- # killprocess 547627 00:21:20.077 06:58:32 -- common/autotest_common.sh@926 -- # '[' -z 547627 ']' 00:21:20.077 06:58:32 -- common/autotest_common.sh@930 -- # kill -0 547627 00:21:20.077 06:58:32 -- common/autotest_common.sh@931 -- # uname 00:21:20.077 06:58:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:20.077 06:58:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 547627 00:21:20.077 06:58:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:20.077 06:58:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:20.077 06:58:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 547627' 00:21:20.077 killing process with pid 547627 00:21:20.077 06:58:32 -- common/autotest_common.sh@945 -- # kill 547627 00:21:20.077 06:58:32 -- common/autotest_common.sh@950 -- # wait 547627 00:21:20.077 06:58:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:20.077 06:58:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:20.077 06:58:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:20.077 06:58:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.077 06:58:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:20.077 06:58:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.077 06:58:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.077 06:58:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.644 06:58:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:20.644 00:21:20.644 real 1m9.338s 00:21:20.644 user 4m14.202s 00:21:20.644 sys 0m6.986s 00:21:20.644 06:58:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.644 06:58:34 -- common/autotest_common.sh@10 -- # set +x 00:21:20.644 ************************************ 00:21:20.644 END TEST nvmf_initiator_timeout 00:21:20.644 ************************************ 00:21:20.644 06:58:34 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:21:20.644 06:58:34 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:21:20.644 06:58:34 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:21:20.644 06:58:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:20.644 06:58:34 -- common/autotest_common.sh@10 -- # set +x 00:21:23.171 06:58:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:23.171 06:58:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:23.171 06:58:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:23.171 06:58:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:23.171 06:58:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:23.171 06:58:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:23.171 06:58:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:23.171 06:58:37 -- nvmf/common.sh@294 -- # net_devs=() 00:21:23.171 06:58:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:23.171 06:58:37 -- nvmf/common.sh@295 -- # e810=() 00:21:23.171 06:58:37 -- nvmf/common.sh@295 -- # local -ga e810 00:21:23.171 06:58:37 -- nvmf/common.sh@296 -- # x722=() 
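For the record, the mechanism of the initiator_timeout test that just passed: while fio ran, the delay bdev's latencies were pushed far past the initiator timeout and later restored, which is why the write clat max above lands around 41 s (41103k usec). Condensed from the RPC calls in the log (note the logged p99_write raise really is 310000000, an order of magnitude above the others):

    # raise latencies mid-run so in-flight initiator I/O times out (values in usec)
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # drop back to 30 usec so queued I/O can complete before fio finishes
    for metric in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$metric" 30
    done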
00:21:23.171 06:58:37 -- nvmf/common.sh@296 -- # local -ga x722 00:21:23.171 06:58:37 -- nvmf/common.sh@297 -- # mlx=() 00:21:23.171 06:58:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:23.171 06:58:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.171 06:58:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:23.171 06:58:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:23.171 06:58:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:23.171 06:58:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:23.171 06:58:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:23.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:23.171 06:58:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:23.171 06:58:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:23.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:23.171 06:58:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:23.171 06:58:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:23.171 06:58:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:23.171 06:58:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.171 06:58:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:23.171 06:58:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.171 06:58:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:23.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:23.171 06:58:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:23.171 06:58:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:23.171 06:58:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.171 06:58:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:23.171 06:58:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.171 06:58:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:23.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:23.171 06:58:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.171 06:58:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:23.171 06:58:37 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.171 06:58:37 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:21:23.171 06:58:37 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:23.171 06:58:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:23.172 06:58:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:23.172 06:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:23.172 ************************************ 00:21:23.172 START TEST nvmf_perf_adq 00:21:23.172 ************************************ 00:21:23.172 06:58:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:23.172 * Looking for test storage... 00:21:23.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:23.172 06:58:37 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.172 06:58:37 -- nvmf/common.sh@7 -- # uname -s 00:21:23.172 06:58:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:23.172 06:58:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.172 06:58:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.172 06:58:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.172 06:58:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.172 06:58:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.172 06:58:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.172 06:58:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.172 06:58:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.172 06:58:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.172 06:58:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.172 06:58:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.172 06:58:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.172 06:58:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.172 06:58:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.172 06:58:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.172 06:58:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.172 06:58:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.172 06:58:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.172 06:58:37 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...toolchain triplet repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.172 06:58:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same entries, /opt/go prepended...] 00:21:23.172 06:58:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same entries, /opt/protoc prepended...] 00:21:23.172 06:58:37 -- paths/export.sh@5 -- # export PATH 00:21:23.172 06:58:37 -- paths/export.sh@6 -- # echo [...same PATH as @4...] 00:21:23.172 06:58:37 -- nvmf/common.sh@46 -- # : 0 00:21:23.172 06:58:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:23.172 06:58:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:23.172 06:58:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:23.172 06:58:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.172 06:58:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.172 06:58:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:23.172 06:58:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:23.172 06:58:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:23.172 06:58:37 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:23.172 06:58:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:23.172 06:58:37 -- common/autotest_common.sh@10 -- # set +x 00:21:25.698 06:58:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:25.698 06:58:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:25.698 06:58:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:25.698 06:58:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:25.698 06:58:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:25.698 06:58:39 -- nvmf/common.sh@292 -- # pci_drivers=()
00:21:25.698 06:58:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:25.698 06:58:39 -- nvmf/common.sh@294 -- # net_devs=() 00:21:25.698 06:58:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:25.698 06:58:39 -- nvmf/common.sh@295 -- # e810=() 00:21:25.698 06:58:39 -- nvmf/common.sh@295 -- # local -ga e810 00:21:25.698 06:58:39 -- nvmf/common.sh@296 -- # x722=() 00:21:25.698 06:58:39 -- nvmf/common.sh@296 -- # local -ga x722 00:21:25.698 06:58:39 -- nvmf/common.sh@297 -- # mlx=() 00:21:25.698 06:58:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:25.698 06:58:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.698 06:58:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:25.698 06:58:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:25.698 06:58:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:25.698 06:58:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:25.698 06:58:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:25.698 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:25.698 06:58:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:25.698 06:58:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:25.698 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:25.698 06:58:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.698 06:58:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:25.699 06:58:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:25.699 06:58:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:25.699 06:58:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:25.699 06:58:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:25.699 06:58:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.699 06:58:39 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:25.699 06:58:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.699 06:58:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:25.699 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:25.699 06:58:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.699 06:58:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:25.699 06:58:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.699 06:58:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:25.699 06:58:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.699 06:58:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:25.699 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:25.699 06:58:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.699 06:58:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:25.699 06:58:39 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.699 06:58:39 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:25.699 06:58:39 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:25.699 06:58:39 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:21:25.699 06:58:39 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:26.264 06:58:40 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:27.636 06:58:41 -- target/perf_adq.sh@54 -- # sleep 5 00:21:32.912 06:58:46 -- target/perf_adq.sh@67 -- # nvmftestinit 00:21:32.912 06:58:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:32.912 06:58:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:32.912 06:58:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:32.912 06:58:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:32.912 06:58:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:32.912 06:58:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:32.912 06:58:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:32.912 06:58:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.912 06:58:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:32.912 06:58:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:32.912 06:58:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:32.912 06:58:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.912 06:58:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:32.913 06:58:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:32.913 06:58:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:32.913 06:58:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:32.913 06:58:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:32.913 06:58:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:32.913 06:58:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:32.913 06:58:46 -- nvmf/common.sh@294 -- # net_devs=() 00:21:32.913 06:58:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:32.913 06:58:46 -- nvmf/common.sh@295 -- # e810=() 00:21:32.913 06:58:46 -- nvmf/common.sh@295 -- # local -ga e810 00:21:32.913 06:58:46 -- nvmf/common.sh@296 -- # x722=() 00:21:32.913 06:58:46 -- nvmf/common.sh@296 -- # local -ga x722 00:21:32.913 06:58:46 -- nvmf/common.sh@297 -- # mlx=() 00:21:32.913 06:58:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:32.913 06:58:46 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.913 06:58:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:32.913 06:58:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:32.913 06:58:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:32.913 06:58:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:32.913 06:58:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:32.913 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:32.913 06:58:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:32.913 06:58:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:32.913 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:32.913 06:58:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:32.913 06:58:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:32.913 06:58:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.913 06:58:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:32.913 06:58:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.913 06:58:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:32.913 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:32.913 06:58:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.913 06:58:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:32.913 06:58:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.913 06:58:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
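The discovery pass traced above is a sysfs walk: nvmf/common.sh buckets each PCI function by vendor/device ID (e810, x722, mlx) and then resolves the selected functions to their kernel net devices. A minimal standalone sketch of that resolution for the two E810 ports found here (0x8086:0x159b); the loop shape and echo format are illustrative, not the actual helper:

    #!/usr/bin/env bash
    # Resolve every Intel E810 function (vendor 0x8086, device 0x159b) to its
    # netdev, mirroring the "Found net devices under ..." lines in the trace.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        echo "Found ${pci##*/} (0x8086 - 0x159b)"
        # Each netdev bound to the function appears as a directory under net/
        for dev in "$pci"/net/*; do
            [[ -e $dev ]] && echo "Found net devices under ${pci##*/}: ${dev##*/}"
        done
    done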
00:21:32.913 06:58:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.913 06:58:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:32.913 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:32.913 06:58:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.913 06:58:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:32.913 06:58:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:32.913 06:58:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:32.913 06:58:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.913 06:58:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.913 06:58:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.913 06:58:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:32.913 06:58:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.913 06:58:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.913 06:58:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:32.913 06:58:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.913 06:58:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.913 06:58:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:32.913 06:58:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:32.913 06:58:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.913 06:58:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.913 06:58:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.913 06:58:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.913 06:58:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:32.913 06:58:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.913 06:58:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.913 06:58:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.913 06:58:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:32.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:21:32.913 00:21:32.913 --- 10.0.0.2 ping statistics --- 00:21:32.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.913 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:21:32.913 06:58:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:32.913 00:21:32.913 --- 10.0.0.1 ping statistics --- 00:21:32.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.913 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:32.913 06:58:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.913 06:58:46 -- nvmf/common.sh@410 -- # return 0 00:21:32.913 06:58:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:32.913 06:58:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.913 06:58:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:32.913 06:58:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.913 06:58:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:32.913 06:58:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:32.913 06:58:46 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:32.913 06:58:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:32.913 06:58:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:32.913 06:58:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.913 06:58:46 -- nvmf/common.sh@469 -- # nvmfpid=560713 00:21:32.913 06:58:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:32.913 06:58:46 -- nvmf/common.sh@470 -- # waitforlisten 560713 00:21:32.913 06:58:46 -- common/autotest_common.sh@819 -- # '[' -z 560713 ']' 00:21:32.913 06:58:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.913 06:58:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:32.913 06:58:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.913 06:58:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:32.913 06:58:46 -- common/autotest_common.sh@10 -- # set +x 00:21:32.913 [2024-05-15 06:58:46.973514] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:32.913 [2024-05-15 06:58:46.973584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.913 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.913 [2024-05-15 06:58:47.051685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.171 [2024-05-15 06:58:47.164323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:33.171 [2024-05-15 06:58:47.164480] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.171 [2024-05-15 06:58:47.164505] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.171 [2024-05-15 06:58:47.164525] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
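Everything from nvmf_tcp_init down to the nvmf_tgt launch above is namespace plumbing, and it can be read straight out of the trace: the first E810 port (cvl_0_0) becomes the target side inside a private namespace, its sibling (cvl_0_1) stays in the root namespace as the initiator, and the target binary runs under ip netns exec. Condensed from the traced commands, with addresses and interface names exactly as logged:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                     # root namespace -> target, as in the trace
    ip netns exec "$NS" ping -c 1 10.0.0.1 # target namespace -> initiator
    # The target app then runs entirely inside the namespace:
    ip netns exec "$NS" nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc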
00:21:33.171 [2024-05-15 06:58:47.164584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.171 [2024-05-15 06:58:47.164647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.171 [2024-05-15 06:58:47.164718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.171 [2024-05-15 06:58:47.164725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.736 06:58:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:33.736 06:58:47 -- common/autotest_common.sh@852 -- # return 0 00:21:33.736 06:58:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:33.736 06:58:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:33.736 06:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 06:58:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.994 06:58:47 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:21:33.994 06:58:47 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:33.994 06:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.994 06:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 06:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.994 06:58:47 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:33.994 06:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.994 06:58:47 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 06:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.994 06:58:48 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:33.994 06:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.994 06:58:48 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 [2024-05-15 06:58:48.096544] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.994 06:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.994 06:58:48 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:33.994 06:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.994 06:58:48 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 Malloc1 00:21:33.994 06:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.994 06:58:48 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.994 06:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.994 06:58:48 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 06:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.994 06:58:48 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:33.994 06:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.994 06:58:48 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 06:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.994 06:58:48 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.994 06:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.994 06:58:48 -- common/autotest_common.sh@10 -- # set +x 00:21:33.994 [2024-05-15 06:58:48.147700] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.994 06:58:48 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.994 06:58:48 -- target/perf_adq.sh@73 -- # perfpid=560940 00:21:33.994 06:58:48 -- target/perf_adq.sh@74 -- # sleep 2 00:21:33.994 06:58:48 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:33.994 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.523 06:58:50 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:21:36.523 06:58:50 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:36.523 06:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:36.523 06:58:50 -- target/perf_adq.sh@76 -- # wc -l 00:21:36.523 06:58:50 -- common/autotest_common.sh@10 -- # set +x 00:21:36.523 06:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:36.523 06:58:50 -- target/perf_adq.sh@76 -- # count=4 00:21:36.523 06:58:50 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:21:36.523 06:58:50 -- target/perf_adq.sh@81 -- # wait 560940 00:21:44.700 Initializing NVMe Controllers 00:21:44.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:44.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:44.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:44.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:44.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:44.700 Initialization complete. Launching workers. 00:21:44.700 ======================================================== 00:21:44.700 Latency(us) 00:21:44.700 Device Information : IOPS MiB/s Average min max 00:21:44.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11513.30 44.97 5558.95 899.39 8590.42 00:21:44.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11709.80 45.74 5475.98 1734.84 46419.48 00:21:44.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6994.80 27.32 9153.78 3624.47 13923.19 00:21:44.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11780.20 46.02 5433.74 1059.70 8385.25 00:21:44.700 ======================================================== 00:21:44.700 Total : 41998.10 164.06 6099.42 899.39 46419.48 00:21:44.700 00:21:44.700 06:58:58 -- target/perf_adq.sh@82 -- # nvmftestfini 00:21:44.700 06:58:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:44.700 06:58:58 -- nvmf/common.sh@116 -- # sync 00:21:44.700 06:58:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:44.700 06:58:58 -- nvmf/common.sh@119 -- # set +e 00:21:44.700 06:58:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:44.700 06:58:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:44.700 rmmod nvme_tcp 00:21:44.700 rmmod nvme_fabrics 00:21:44.700 rmmod nvme_keyring 00:21:44.700 06:58:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:44.700 06:58:58 -- nvmf/common.sh@123 -- # set -e 00:21:44.700 06:58:58 -- nvmf/common.sh@124 -- # return 0 00:21:44.700 06:58:58 -- nvmf/common.sh@477 -- # '[' -n 560713 ']' 00:21:44.700 06:58:58 -- nvmf/common.sh@478 -- # killprocess 560713 00:21:44.700 06:58:58 -- common/autotest_common.sh@926 -- # '[' -z 560713 ']' 00:21:44.700 06:58:58 -- common/autotest_common.sh@930 -- # kill 
-0 560713 00:21:44.700 06:58:58 -- common/autotest_common.sh@931 -- # uname 00:21:44.700 06:58:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:44.700 06:58:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 560713 00:21:44.700 06:58:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:44.700 06:58:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:44.700 06:58:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 560713' 00:21:44.700 killing process with pid 560713 00:21:44.700 06:58:58 -- common/autotest_common.sh@945 -- # kill 560713 00:21:44.700 06:58:58 -- common/autotest_common.sh@950 -- # wait 560713 00:21:44.700 06:58:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:44.700 06:58:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:44.700 06:58:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:44.700 06:58:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.700 06:58:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:44.700 06:58:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.700 06:58:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.700 06:58:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.605 06:59:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:46.605 06:59:00 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:46.605 06:59:00 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:47.173 06:59:01 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:48.547 06:59:02 -- target/perf_adq.sh@54 -- # sleep 5 00:21:53.813 06:59:07 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:53.813 06:59:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:53.813 06:59:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.813 06:59:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:53.813 06:59:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:53.813 06:59:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:53.813 06:59:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.813 06:59:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.813 06:59:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.813 06:59:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:53.813 06:59:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:53.813 06:59:07 -- common/autotest_common.sh@10 -- # set +x 00:21:53.813 06:59:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:53.813 06:59:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:53.813 06:59:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:53.813 06:59:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:53.813 06:59:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:53.813 06:59:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:53.813 06:59:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:53.813 06:59:07 -- nvmf/common.sh@294 -- # net_devs=() 00:21:53.813 06:59:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:53.813 06:59:07 -- nvmf/common.sh@295 -- # e810=() 00:21:53.813 06:59:07 -- nvmf/common.sh@295 -- # local -ga e810 00:21:53.813 06:59:07 -- nvmf/common.sh@296 -- # x722=() 00:21:53.813 06:59:07 -- nvmf/common.sh@296 -- # local -ga x722 00:21:53.813 06:59:07 -- nvmf/common.sh@297 -- # mlx=() 00:21:53.813 06:59:07 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:21:53.813 06:59:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.813 06:59:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:53.813 06:59:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:53.813 06:59:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:53.813 06:59:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:53.813 06:59:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:53.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:53.813 06:59:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:53.813 06:59:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:53.814 06:59:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:53.814 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:53.814 06:59:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:53.814 06:59:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:53.814 06:59:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.814 06:59:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:53.814 06:59:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.814 06:59:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:53.814 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:53.814 06:59:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.814 06:59:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:53.814 06:59:07 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.814 06:59:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:53.814 06:59:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.814 06:59:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:53.814 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:53.814 06:59:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.814 06:59:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:53.814 06:59:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:53.814 06:59:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:53.814 06:59:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.814 06:59:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.814 06:59:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.814 06:59:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:53.814 06:59:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.814 06:59:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.814 06:59:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:53.814 06:59:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.814 06:59:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.814 06:59:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:53.814 06:59:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:53.814 06:59:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.814 06:59:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.814 06:59:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.814 06:59:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.814 06:59:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:53.814 06:59:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.814 06:59:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.814 06:59:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.814 06:59:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:53.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:21:53.814 00:21:53.814 --- 10.0.0.2 ping statistics --- 00:21:53.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.814 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:21:53.814 06:59:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:21:53.814 00:21:53.814 --- 10.0.0.1 ping statistics --- 00:21:53.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.814 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:21:53.814 06:59:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.814 06:59:07 -- nvmf/common.sh@410 -- # return 0 00:21:53.814 06:59:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:53.814 06:59:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.814 06:59:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:53.814 06:59:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.814 06:59:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:53.814 06:59:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:53.814 06:59:07 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:53.814 06:59:07 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:53.814 06:59:07 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:53.814 06:59:07 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:53.814 net.core.busy_poll = 1 00:21:53.814 06:59:07 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:53.814 net.core.busy_read = 1 00:21:53.814 06:59:07 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:53.814 06:59:07 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:53.814 06:59:07 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:53.814 06:59:07 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:53.814 06:59:07 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:53.814 06:59:08 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:53.814 06:59:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:53.814 06:59:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:53.814 06:59:08 -- common/autotest_common.sh@10 -- # set +x 00:21:53.814 06:59:08 -- nvmf/common.sh@469 -- # nvmfpid=563497 00:21:53.814 06:59:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:53.814 06:59:08 -- nvmf/common.sh@470 -- # waitforlisten 563497 00:21:53.814 06:59:08 -- common/autotest_common.sh@819 -- # '[' -z 563497 ']' 00:21:53.814 06:59:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.814 06:59:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:53.814 06:59:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
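adq_configure_driver, traced just above, is the host half of the ADQ setup: hardware TC offload on the E810 port, busy polling enabled system-wide, a two-class mqprio layout, and a flower filter that pins NVMe/TCP traffic (10.0.0.2:4420) to the application traffic class in hardware. Condensed from the traced commands, all run against the target port in its namespace:

    tns() { ip netns exec cvl_0_0_ns_spdk "$@"; }   # run in the target namespace
    IFACE=cvl_0_0
    tns ethtool --offload "$IFACE" hw-tc-offload on
    tns ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 = 2 default queues (2@0), TC1 = 2 ADQ queues (2@2),
    # offloaded to the NIC (hw 1 mode channel)
    tns tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tns tc qdisc add dev "$IFACE" ingress
    # Steer NVMe/TCP for 10.0.0.2:4420 into TC1, hardware-only match (skip_sw)
    tns tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target-side half shows up a little further down in the trace: sock_impl_set_options is called with --enable-placement-id 1 and the transport is created with --sock-priority 1, which is what ties poll groups to the ADQ queues that the later nvmf_get_stats check counts.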
00:21:53.814 06:59:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:53.814 06:59:08 -- common/autotest_common.sh@10 -- # set +x 00:21:54.072 [2024-05-15 06:59:08.058222] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:21:54.072 [2024-05-15 06:59:08.058296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.072 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.072 [2024-05-15 06:59:08.134928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:54.072 [2024-05-15 06:59:08.242901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:54.072 [2024-05-15 06:59:08.243098] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.072 [2024-05-15 06:59:08.243123] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.072 [2024-05-15 06:59:08.243141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.072 [2024-05-15 06:59:08.243201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.072 [2024-05-15 06:59:08.243288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.072 [2024-05-15 06:59:08.243405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:54.072 [2024-05-15 06:59:08.243414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.003 06:59:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:55.003 06:59:08 -- common/autotest_common.sh@852 -- # return 0 00:21:55.003 06:59:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:55.003 06:59:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:55.003 06:59:08 -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 06:59:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.003 06:59:09 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:55.003 06:59:09 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:55.003 06:59:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.003 06:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 06:59:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.003 06:59:09 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:55.003 06:59:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.003 06:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 06:59:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.003 06:59:09 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:55.003 06:59:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.003 06:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 [2024-05-15 06:59:09.135888] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.003 06:59:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.003 06:59:09 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:55.003 06:59:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.003 06:59:09 -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.003 Malloc1 00:21:55.003 06:59:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.003 06:59:09 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:55.003 06:59:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.003 06:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 06:59:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.003 06:59:09 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:55.003 06:59:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.003 06:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 06:59:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.003 06:59:09 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:55.003 06:59:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.003 06:59:09 -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 [2024-05-15 06:59:09.188966] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.004 06:59:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.004 06:59:09 -- target/perf_adq.sh@94 -- # perfpid=563656 00:21:55.004 06:59:09 -- target/perf_adq.sh@95 -- # sleep 2 00:21:55.004 06:59:09 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:55.004 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.550 06:59:11 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:57.550 06:59:11 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:57.550 06:59:11 -- target/perf_adq.sh@97 -- # wc -l 00:21:57.550 06:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:57.550 06:59:11 -- common/autotest_common.sh@10 -- # set +x 00:21:57.550 06:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:57.550 06:59:11 -- target/perf_adq.sh@97 -- # count=2 00:21:57.550 06:59:11 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:57.550 06:59:11 -- target/perf_adq.sh@103 -- # wait 563656 00:22:05.655 Initializing NVMe Controllers 00:22:05.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:05.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:05.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:05.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:05.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:05.655 Initialization complete. Launching workers. 
00:22:05.655 ======================================================== 00:22:05.655 Latency(us) 00:22:05.655 Device Information : IOPS MiB/s Average min max 00:22:05.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8318.39 32.49 7709.37 1729.88 52697.25 00:22:05.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4533.73 17.71 14140.95 1887.30 57417.91 00:22:05.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7423.93 29.00 8645.30 1512.34 52427.07 00:22:05.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6650.26 25.98 9623.52 1722.09 58415.86 00:22:05.655 ======================================================== 00:22:05.655 Total : 26926.31 105.18 9523.10 1512.34 58415.86 00:22:05.655 00:22:05.655 06:59:19 -- target/perf_adq.sh@104 -- # nvmftestfini 00:22:05.655 06:59:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:05.655 06:59:19 -- nvmf/common.sh@116 -- # sync 00:22:05.655 06:59:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:05.655 06:59:19 -- nvmf/common.sh@119 -- # set +e 00:22:05.655 06:59:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:05.655 06:59:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:05.655 rmmod nvme_tcp 00:22:05.655 rmmod nvme_fabrics 00:22:05.655 rmmod nvme_keyring 00:22:05.655 06:59:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:05.655 06:59:19 -- nvmf/common.sh@123 -- # set -e 00:22:05.655 06:59:19 -- nvmf/common.sh@124 -- # return 0 00:22:05.655 06:59:19 -- nvmf/common.sh@477 -- # '[' -n 563497 ']' 00:22:05.655 06:59:19 -- nvmf/common.sh@478 -- # killprocess 563497 00:22:05.655 06:59:19 -- common/autotest_common.sh@926 -- # '[' -z 563497 ']' 00:22:05.655 06:59:19 -- common/autotest_common.sh@930 -- # kill -0 563497 00:22:05.655 06:59:19 -- common/autotest_common.sh@931 -- # uname 00:22:05.655 06:59:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:05.655 06:59:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 563497 00:22:05.655 06:59:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:05.655 06:59:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:05.655 06:59:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 563497' 00:22:05.655 killing process with pid 563497 00:22:05.655 06:59:19 -- common/autotest_common.sh@945 -- # kill 563497 00:22:05.655 06:59:19 -- common/autotest_common.sh@950 -- # wait 563497 00:22:05.655 06:59:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:05.655 06:59:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:05.655 06:59:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:05.655 06:59:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.655 06:59:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:05.655 06:59:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.655 06:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.655 06:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.977 06:59:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:08.978 06:59:22 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:22:08.978 00:22:08.978 real 0m45.665s 00:22:08.978 user 2m30.971s 00:22:08.978 sys 0m15.425s 00:22:08.978 06:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.978 06:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.978 
************************************ 00:22:08.978 END TEST nvmf_perf_adq 00:22:08.978 ************************************ 00:22:08.978 06:59:22 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:08.978 06:59:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:08.978 06:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.978 06:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.978 ************************************ 00:22:08.978 START TEST nvmf_shutdown 00:22:08.978 ************************************ 00:22:08.978 06:59:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:08.978 * Looking for test storage... 00:22:08.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.978 06:59:22 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.978 06:59:22 -- nvmf/common.sh@7 -- # uname -s 00:22:08.978 06:59:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.978 06:59:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.978 06:59:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.978 06:59:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.978 06:59:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.978 06:59:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.978 06:59:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.978 06:59:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.978 06:59:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.978 06:59:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.978 06:59:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.978 06:59:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.978 06:59:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.978 06:59:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.978 06:59:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.978 06:59:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.978 06:59:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.978 06:59:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.978 06:59:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.978 06:59:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.978 06:59:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.978 06:59:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.978 06:59:22 -- paths/export.sh@5 -- # export PATH 00:22:08.978 06:59:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.978 06:59:22 -- nvmf/common.sh@46 -- # : 0 00:22:08.978 06:59:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:08.978 06:59:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:08.978 06:59:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:08.978 06:59:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.978 06:59:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.978 06:59:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:08.978 06:59:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:08.978 06:59:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:08.978 06:59:22 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.978 06:59:22 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.978 06:59:22 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:08.978 06:59:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:08.978 06:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.978 06:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:08.978 ************************************ 00:22:08.978 START TEST nvmf_shutdown_tc1 00:22:08.978 ************************************ 00:22:08.978 06:59:22 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:22:08.978 06:59:22 -- target/shutdown.sh@74 -- # starttarget 00:22:08.978 06:59:22 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:08.978 06:59:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:08.978 06:59:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.978 06:59:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:08.978 06:59:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:08.978 06:59:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:08.978 
06:59:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.978 06:59:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.978 06:59:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.978 06:59:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:08.978 06:59:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:08.978 06:59:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:08.978 06:59:22 -- common/autotest_common.sh@10 -- # set +x 00:22:11.505 06:59:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:11.505 06:59:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:11.505 06:59:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:11.505 06:59:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:11.505 06:59:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:11.505 06:59:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:11.505 06:59:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:11.505 06:59:25 -- nvmf/common.sh@294 -- # net_devs=() 00:22:11.505 06:59:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:11.505 06:59:25 -- nvmf/common.sh@295 -- # e810=() 00:22:11.505 06:59:25 -- nvmf/common.sh@295 -- # local -ga e810 00:22:11.505 06:59:25 -- nvmf/common.sh@296 -- # x722=() 00:22:11.505 06:59:25 -- nvmf/common.sh@296 -- # local -ga x722 00:22:11.505 06:59:25 -- nvmf/common.sh@297 -- # mlx=() 00:22:11.505 06:59:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:11.505 06:59:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.505 06:59:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:11.505 06:59:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:11.505 06:59:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:11.505 06:59:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:11.505 06:59:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:11.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:11.505 06:59:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:22:11.505 06:59:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:11.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:11.505 06:59:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:11.505 06:59:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:11.505 06:59:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.505 06:59:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:11.505 06:59:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.505 06:59:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:11.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:11.505 06:59:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.505 06:59:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:11.505 06:59:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.505 06:59:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:11.505 06:59:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.505 06:59:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:11.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:11.505 06:59:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.505 06:59:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:11.505 06:59:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:11.505 06:59:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:11.505 06:59:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:11.505 06:59:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.505 06:59:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.505 06:59:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.505 06:59:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:11.505 06:59:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.505 06:59:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.505 06:59:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:11.505 06:59:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.505 06:59:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.505 06:59:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:11.505 06:59:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:11.505 06:59:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.505 06:59:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.505 06:59:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.505 06:59:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.505 06:59:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:11.505 06:59:25 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.505 06:59:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.505 06:59:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.505 06:59:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:11.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:11.505 00:22:11.505 --- 10.0.0.2 ping statistics --- 00:22:11.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.505 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:11.505 06:59:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:22:11.506 00:22:11.506 --- 10.0.0.1 ping statistics --- 00:22:11.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.506 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:22:11.506 06:59:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.506 06:59:25 -- nvmf/common.sh@410 -- # return 0 00:22:11.506 06:59:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:11.506 06:59:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.506 06:59:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:11.506 06:59:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:11.506 06:59:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.506 06:59:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:11.506 06:59:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:11.506 06:59:25 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:11.506 06:59:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:11.506 06:59:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:11.506 06:59:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.506 06:59:25 -- nvmf/common.sh@469 -- # nvmfpid=567294 00:22:11.506 06:59:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.506 06:59:25 -- nvmf/common.sh@470 -- # waitforlisten 567294 00:22:11.506 06:59:25 -- common/autotest_common.sh@819 -- # '[' -z 567294 ']' 00:22:11.506 06:59:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.506 06:59:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:11.506 06:59:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.506 06:59:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:11.506 06:59:25 -- common/autotest_common.sh@10 -- # set +x 00:22:11.506 [2024-05-15 06:59:25.511597] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:22:11.506 [2024-05-15 06:59:25.511678] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.506 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.506 [2024-05-15 06:59:25.593796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.506 [2024-05-15 06:59:25.702241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:11.506 [2024-05-15 06:59:25.702378] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.506 [2024-05-15 06:59:25.702401] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.506 [2024-05-15 06:59:25.702413] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.506 [2024-05-15 06:59:25.702493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.506 [2024-05-15 06:59:25.702572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.506 [2024-05-15 06:59:25.702634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:11.506 [2024-05-15 06:59:25.702636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.438 06:59:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:12.438 06:59:26 -- common/autotest_common.sh@852 -- # return 0 00:22:12.438 06:59:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:12.438 06:59:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:12.438 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.438 06:59:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.438 06:59:26 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.438 06:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.438 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.438 [2024-05-15 06:59:26.534581] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.438 06:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.438 06:59:26 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:12.438 06:59:26 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:12.438 06:59:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:12.438 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.438 06:59:26 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- 
target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.438 06:59:26 -- target/shutdown.sh@28 -- # cat 00:22:12.438 06:59:26 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:12.438 06:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.438 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:22:12.438 Malloc1 00:22:12.438 [2024-05-15 06:59:26.620121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.438 Malloc2 00:22:12.696 Malloc3 00:22:12.696 Malloc4 00:22:12.696 Malloc5 00:22:12.696 Malloc6 00:22:12.696 Malloc7 00:22:12.954 Malloc8 00:22:12.954 Malloc9 00:22:12.954 Malloc10 00:22:12.954 06:59:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.954 06:59:27 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:12.954 06:59:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:12.954 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:22:12.954 06:59:27 -- target/shutdown.sh@78 -- # perfpid=567609 00:22:12.954 06:59:27 -- target/shutdown.sh@79 -- # waitforlisten 567609 /var/tmp/bdevperf.sock 00:22:12.954 06:59:27 -- common/autotest_common.sh@819 -- # '[' -z 567609 ']' 00:22:12.954 06:59:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.954 06:59:27 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.954 06:59:27 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:12.954 06:59:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:12.954 06:59:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.954 06:59:27 -- nvmf/common.sh@520 -- # config=() 00:22:12.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
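The run of heredocs that follows is gen_nvmf_target_json emitting one bdev_nvme_attach_controller stanza per subsystem id, which the initiator apps consume via process substitution (--json /dev/fd/62 or /dev/fd/63 in these traces). A condensed sketch of the visible pattern (the real helper in nvmf/common.sh also runs the assembled document through jq, the 'jq .' trace further down):

gen_target_json_sketch() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do          # default: a single subsystem, id 1
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,                             # join the stanzas with commas, exactly
  printf '%s\n' "${config[*]}"            # as the printf trace below shows
}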
00:22:12.954 06:59:27 -- nvmf/common.sh@520 -- # local subsystem config 00:22:12.954 06:59:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:12.954 06:59:27 -- common/autotest_common.sh@10 -- # set +x 00:22:12.954 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.954 { 00:22:12.954 "params": { 00:22:12.954 "name": "Nvme$subsystem", 00:22:12.954 "trtype": "$TEST_TRANSPORT", 00:22:12.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.954 "adrfam": "ipv4", 00:22:12.954 "trsvcid": "$NVMF_PORT", 00:22:12.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.954 "hdgst": ${hdgst:-false}, 00:22:12.954 "ddgst": ${ddgst:-false} 00:22:12.954 }, 00:22:12.954 "method": "bdev_nvme_attach_controller" 00:22:12.954 } 00:22:12.954 EOF 00:22:12.954 )") 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.954 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.954 { 00:22:12.954 "params": { 00:22:12.954 "name": "Nvme$subsystem", 00:22:12.954 "trtype": "$TEST_TRANSPORT", 00:22:12.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.954 "adrfam": "ipv4", 00:22:12.954 "trsvcid": "$NVMF_PORT", 00:22:12.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.954 "hdgst": ${hdgst:-false}, 00:22:12.954 "ddgst": ${ddgst:-false} 00:22:12.954 }, 00:22:12.954 "method": "bdev_nvme_attach_controller" 00:22:12.954 } 00:22:12.954 EOF 00:22:12.954 )") 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.954 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.954 { 00:22:12.954 "params": { 00:22:12.954 "name": "Nvme$subsystem", 00:22:12.954 "trtype": "$TEST_TRANSPORT", 00:22:12.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.954 "adrfam": "ipv4", 00:22:12.954 "trsvcid": "$NVMF_PORT", 00:22:12.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.954 "hdgst": ${hdgst:-false}, 00:22:12.954 "ddgst": ${ddgst:-false} 00:22:12.954 }, 00:22:12.954 "method": "bdev_nvme_attach_controller" 00:22:12.954 } 00:22:12.954 EOF 00:22:12.954 )") 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.954 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.954 { 00:22:12.954 "params": { 00:22:12.954 "name": "Nvme$subsystem", 00:22:12.954 "trtype": "$TEST_TRANSPORT", 00:22:12.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.954 "adrfam": "ipv4", 00:22:12.954 "trsvcid": "$NVMF_PORT", 00:22:12.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.954 "hdgst": ${hdgst:-false}, 00:22:12.954 "ddgst": ${ddgst:-false} 00:22:12.954 }, 00:22:12.954 "method": "bdev_nvme_attach_controller" 00:22:12.954 } 00:22:12.954 EOF 00:22:12.954 )") 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.954 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.954 { 00:22:12.954 "params": { 00:22:12.954 "name": "Nvme$subsystem", 00:22:12.954 "trtype": "$TEST_TRANSPORT", 00:22:12.954 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:12.954 "adrfam": "ipv4", 00:22:12.954 "trsvcid": "$NVMF_PORT", 00:22:12.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.954 "hdgst": ${hdgst:-false}, 00:22:12.954 "ddgst": ${ddgst:-false} 00:22:12.954 }, 00:22:12.954 "method": "bdev_nvme_attach_controller" 00:22:12.954 } 00:22:12.954 EOF 00:22:12.954 )") 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.954 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.954 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.954 { 00:22:12.954 "params": { 00:22:12.954 "name": "Nvme$subsystem", 00:22:12.954 "trtype": "$TEST_TRANSPORT", 00:22:12.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.954 "adrfam": "ipv4", 00:22:12.954 "trsvcid": "$NVMF_PORT", 00:22:12.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.955 "hdgst": ${hdgst:-false}, 00:22:12.955 "ddgst": ${ddgst:-false} 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 } 00:22:12.955 EOF 00:22:12.955 )") 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.955 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.955 { 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme$subsystem", 00:22:12.955 "trtype": "$TEST_TRANSPORT", 00:22:12.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "$NVMF_PORT", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.955 "hdgst": ${hdgst:-false}, 00:22:12.955 "ddgst": ${ddgst:-false} 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 } 00:22:12.955 EOF 00:22:12.955 )") 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.955 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.955 { 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme$subsystem", 00:22:12.955 "trtype": "$TEST_TRANSPORT", 00:22:12.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "$NVMF_PORT", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.955 "hdgst": ${hdgst:-false}, 00:22:12.955 "ddgst": ${ddgst:-false} 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 } 00:22:12.955 EOF 00:22:12.955 )") 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.955 06:59:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.955 { 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme$subsystem", 00:22:12.955 "trtype": "$TEST_TRANSPORT", 00:22:12.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "$NVMF_PORT", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.955 "hdgst": ${hdgst:-false}, 00:22:12.955 "ddgst": ${ddgst:-false} 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 } 00:22:12.955 EOF 00:22:12.955 )") 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.955 06:59:27 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:12.955 { 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme$subsystem", 00:22:12.955 "trtype": "$TEST_TRANSPORT", 00:22:12.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "$NVMF_PORT", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.955 "hdgst": ${hdgst:-false}, 00:22:12.955 "ddgst": ${ddgst:-false} 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 } 00:22:12.955 EOF 00:22:12.955 )") 00:22:12.955 06:59:27 -- nvmf/common.sh@542 -- # cat 00:22:12.955 06:59:27 -- nvmf/common.sh@544 -- # jq . 00:22:12.955 06:59:27 -- nvmf/common.sh@545 -- # IFS=, 00:22:12.955 06:59:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme1", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme2", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme3", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme4", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme5", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme6", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme7", 00:22:12.955 "trtype": 
"tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme8", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme9", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 },{ 00:22:12.955 "params": { 00:22:12.955 "name": "Nvme10", 00:22:12.955 "trtype": "tcp", 00:22:12.955 "traddr": "10.0.0.2", 00:22:12.955 "adrfam": "ipv4", 00:22:12.955 "trsvcid": "4420", 00:22:12.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.955 "hdgst": false, 00:22:12.955 "ddgst": false 00:22:12.955 }, 00:22:12.955 "method": "bdev_nvme_attach_controller" 00:22:12.955 }' 00:22:12.955 [2024-05-15 06:59:27.118156] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:12.955 [2024-05-15 06:59:27.118265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:12.955 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.213 [2024-05-15 06:59:27.192892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.213 [2024-05-15 06:59:27.300640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.583 06:59:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:14.583 06:59:28 -- common/autotest_common.sh@852 -- # return 0 00:22:14.583 06:59:28 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:14.583 06:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.583 06:59:28 -- common/autotest_common.sh@10 -- # set +x 00:22:14.583 06:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.583 06:59:28 -- target/shutdown.sh@83 -- # kill -9 567609 00:22:14.583 06:59:28 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:14.583 06:59:28 -- target/shutdown.sh@87 -- # sleep 1 00:22:15.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 567609 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:15.954 06:59:29 -- target/shutdown.sh@88 -- # kill -0 567294 00:22:15.954 06:59:29 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:15.954 06:59:29 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 
2 3 4 5 6 7 8 9 10 00:22:15.954 06:59:29 -- nvmf/common.sh@520 -- # config=() 00:22:15.954 06:59:29 -- nvmf/common.sh@520 -- # local subsystem config 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": 
"$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.954 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:15.954 { 00:22:15.954 "params": { 00:22:15.954 "name": "Nvme$subsystem", 00:22:15.954 "trtype": "$TEST_TRANSPORT", 00:22:15.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.954 "adrfam": "ipv4", 00:22:15.954 "trsvcid": "$NVMF_PORT", 00:22:15.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.954 "hdgst": ${hdgst:-false}, 00:22:15.954 "ddgst": ${ddgst:-false} 00:22:15.954 }, 00:22:15.954 "method": "bdev_nvme_attach_controller" 00:22:15.954 } 00:22:15.954 EOF 00:22:15.954 )") 00:22:15.954 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.955 06:59:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:15.955 06:59:29 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:22:15.955 { 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme$subsystem", 00:22:15.955 "trtype": "$TEST_TRANSPORT", 00:22:15.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "$NVMF_PORT", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:15.955 "hdgst": ${hdgst:-false}, 00:22:15.955 "ddgst": ${ddgst:-false} 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 } 00:22:15.955 EOF 00:22:15.955 )") 00:22:15.955 06:59:29 -- nvmf/common.sh@542 -- # cat 00:22:15.955 06:59:29 -- nvmf/common.sh@544 -- # jq . 00:22:15.955 06:59:29 -- nvmf/common.sh@545 -- # IFS=, 00:22:15.955 06:59:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme1", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme2", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme3", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme4", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme5", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme6", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme7", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 
00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme8", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme9", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 },{ 00:22:15.955 "params": { 00:22:15.955 "name": "Nvme10", 00:22:15.955 "trtype": "tcp", 00:22:15.955 "traddr": "10.0.0.2", 00:22:15.955 "adrfam": "ipv4", 00:22:15.955 "trsvcid": "4420", 00:22:15.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:15.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:15.955 "hdgst": false, 00:22:15.955 "ddgst": false 00:22:15.955 }, 00:22:15.955 "method": "bdev_nvme_attach_controller" 00:22:15.955 }' 00:22:15.955 [2024-05-15 06:59:29.841277] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:15.955 [2024-05-15 06:59:29.841372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567917 ] 00:22:15.955 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.955 [2024-05-15 06:59:29.918312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.955 [2024-05-15 06:59:30.033594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.852 Running I/O for 1 seconds... 
00:22:18.787
00:22:18.787 Latency(us) Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:18.787 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme1n1 : 1.09 366.19 22.89 0.00 0.00 169736.90 46020.84 127382.57
00:22:18.787 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme2n1 : 1.08 332.86 20.80 0.00 0.00 185015.27 10291.58 174762.67
00:22:18.787 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme3n1 : 1.07 331.65 20.73 0.00 0.00 184836.82 36505.98 152237.70
00:22:18.787 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme4n1 : 1.10 363.65 22.73 0.00 0.00 168505.18 10922.67 135926.52
00:22:18.787 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme5n1 : 1.09 365.18 22.82 0.00 0.00 166041.12 37671.06 127382.57
00:22:18.787 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme6n1 : 1.09 364.23 22.76 0.00 0.00 165334.66 37476.88 128936.01
00:22:18.787 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme7n1 : 1.09 363.53 22.72 0.00 0.00 164639.41 35340.89 131266.18
00:22:18.787 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme8n1 : 1.10 362.40 22.65 0.00 0.00 163934.89 35923.44 133596.35
00:22:18.787 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme9n1 : 1.11 359.95 22.50 0.00 0.00 164469.14 29709.65 147577.36
00:22:18.787 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:18.787 Verification LBA range: start 0x0 length 0x400
00:22:18.787 Nvme10n1 : 1.11 363.03 22.69 0.00 0.00 163550.89 11553.75 136703.24
00:22:18.787 ===================================================================================================================
00:22:18.787 Total : 3572.67 223.29 0.00 0.00 169278.87 10291.58 174762.67
00:22:19.045 06:59:33 -- target/shutdown.sh@93 -- # stoptarget
00:22:19.045 06:59:33 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:22:19.045 06:59:33 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:19.045 06:59:33 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:19.045 06:59:33 -- target/shutdown.sh@45 -- # nvmftestfini
00:22:19.045 06:59:33 -- nvmf/common.sh@476 -- # nvmfcleanup
00:22:19.045 06:59:33 -- nvmf/common.sh@116 -- # sync
00:22:19.045 06:59:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:19.045 06:59:33 -- nvmf/common.sh@119 -- # set +e
00:22:19.045 06:59:33 -- nvmf/common.sh@120 -- # for i in {1..20}
00:22:19.045 06:59:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:19.045 rmmod nvme_tcp
00:22:19.045 rmmod nvme_fabrics
00:22:19.045 rmmod nvme_keyring
00:22:19.045 06:59:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:19.045 06:59:33 -- nvmf/common.sh@123 -- # set -e
00:22:19.045 06:59:33 -- nvmf/common.sh@124 -- # return 0
00:22:19.045 06:59:33 -- nvmf/common.sh@477 -- # '[' -n 567294 ']'
00:22:19.045 06:59:33 -- nvmf/common.sh@478 -- # killprocess 567294
00:22:19.045 06:59:33 -- common/autotest_common.sh@926 -- # '[' -z 567294 ']'
00:22:19.045 06:59:33 -- common/autotest_common.sh@930 -- # kill -0 567294
00:22:19.045 06:59:33 -- common/autotest_common.sh@931 -- # uname
00:22:19.045 06:59:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:19.045 06:59:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 567294
00:22:19.302 06:59:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:19.302 06:59:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:19.302 06:59:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 567294'
killing process with pid 567294
00:22:19.302 06:59:33 -- common/autotest_common.sh@945 -- # kill 567294
00:22:19.302 06:59:33 -- common/autotest_common.sh@950 -- # wait 567294
00:22:19.867 06:59:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:22:19.867 06:59:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:22:19.867 06:59:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:22:19.867 06:59:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:19.867 06:59:33 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:22:19.867 06:59:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:19.867 06:59:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:19.867 06:59:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:21.769 06:59:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:22:21.769
00:22:21.769 real 0m12.941s
00:22:21.769 user 0m37.571s
00:22:21.769 sys 0m3.578s
00:22:21.769 06:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:21.769 06:59:35 -- common/autotest_common.sh@10 -- # set +x
00:22:21.769 ************************************
00:22:21.769 END TEST nvmf_shutdown_tc1 ************************************
00:22:21.769 06:59:35 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:22:21.769 06:59:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:22:21.769 06:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:22:21.769 06:59:35 -- common/autotest_common.sh@10 -- # set +x
00:22:21.769 ************************************
00:22:21.769 START TEST nvmf_shutdown_tc2 ************************************
00:22:21.769 06:59:35 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2
00:22:21.769 06:59:35 -- target/shutdown.sh@98 -- # starttarget
00:22:21.769 06:59:35 -- target/shutdown.sh@15 -- # nvmftestinit
00:22:21.769 06:59:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:22:21.769 06:59:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:21.769 06:59:35 -- nvmf/common.sh@436 -- # prepare_net_devs
00:22:21.769 06:59:35 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:22:21.769 06:59:35 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:22:21.769 06:59:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:21.769 06:59:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:21.769 06:59:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:21.769 06:59:35
-- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:21.769 06:59:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:21.769 06:59:35 -- common/autotest_common.sh@10 -- # set +x 00:22:21.769 06:59:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:21.769 06:59:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:21.769 06:59:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:21.769 06:59:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:21.769 06:59:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:21.769 06:59:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:21.769 06:59:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:21.769 06:59:35 -- nvmf/common.sh@294 -- # net_devs=() 00:22:21.769 06:59:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:21.769 06:59:35 -- nvmf/common.sh@295 -- # e810=() 00:22:21.769 06:59:35 -- nvmf/common.sh@295 -- # local -ga e810 00:22:21.769 06:59:35 -- nvmf/common.sh@296 -- # x722=() 00:22:21.769 06:59:35 -- nvmf/common.sh@296 -- # local -ga x722 00:22:21.769 06:59:35 -- nvmf/common.sh@297 -- # mlx=() 00:22:21.769 06:59:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:21.769 06:59:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.769 06:59:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:21.769 06:59:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:21.769 06:59:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:21.769 06:59:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:21.769 06:59:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:21.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:21.769 06:59:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:21.769 06:59:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:21.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:21.769 06:59:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:21.769 06:59:35 
-- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:21.769 06:59:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:21.769 06:59:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.769 06:59:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:21.769 06:59:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.769 06:59:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:21.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:21.769 06:59:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.769 06:59:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:21.769 06:59:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.769 06:59:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:21.769 06:59:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.769 06:59:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:21.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:21.769 06:59:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.769 06:59:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:21.769 06:59:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:21.769 06:59:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:21.769 06:59:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:21.769 06:59:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.769 06:59:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.769 06:59:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.769 06:59:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:21.769 06:59:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.769 06:59:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.769 06:59:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:21.769 06:59:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.769 06:59:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.769 06:59:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:21.769 06:59:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:21.769 06:59:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.769 06:59:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.769 06:59:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.769 06:59:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.769 06:59:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:21.769 06:59:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.027 06:59:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.027 06:59:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
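nvmftestinit has just rerun the PCI discovery for tc2: gather_supported_nvmf_pci_devs keeps only the functions matching the E810 id pair (vendor 0x8086, device 0x159b), then reads each one's netdev name out of sysfs, which is where the 'Found net devices under 0000:0a:00.x: cvl_0_x' lines come from. A simplified equivalent of that loop (the harness itself works from a pre-built pci_bus_cache rather than globbing every device):

shopt -s nullglob                         # skip functions with no bound netdev
for pci in /sys/bus/pci/devices/*; do
  [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
  pci_net_devs=("$pci"/net/*)             # e.g. .../0000:0a:00.0/net/cvl_0_0
  echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]##*/}"
done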
00:22:22.027 06:59:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:22:22.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:22.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms
00:22:22.027
00:22:22.027 --- 10.0.0.2 ping statistics ---
00:22:22.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.027 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:22:22.027 06:59:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:22.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:22.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms
00:22:22.027
00:22:22.027 --- 10.0.0.1 ping statistics ---
00:22:22.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:22.027 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:22:22.027 06:59:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:22.027 06:59:36 -- nvmf/common.sh@410 -- # return 0
00:22:22.027 06:59:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:22:22.027 06:59:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:22.027 06:59:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:22:22.027 06:59:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:22:22.027 06:59:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:22.027 06:59:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:22:22.027 06:59:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:22:22.027 06:59:36 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:22:22.027 06:59:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:22:22.027 06:59:36 -- common/autotest_common.sh@712 -- # xtrace_disable
00:22:22.027 06:59:36 -- common/autotest_common.sh@10 -- # set +x
00:22:22.027 06:59:36 -- nvmf/common.sh@469 -- # nvmfpid=568831
00:22:22.027 06:59:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:22.027 06:59:36 -- nvmf/common.sh@470 -- # waitforlisten 568831
00:22:22.027 06:59:36 -- common/autotest_common.sh@819 -- # '[' -z 568831 ']'
00:22:22.027 06:59:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:22.027 06:59:36 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:22.027 06:59:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:22.027 06:59:36 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:22.027 06:59:36 -- common/autotest_common.sh@10 -- # set +x
00:22:22.027 [2024-05-15 06:59:36.097006] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:22:22.027 [2024-05-15 06:59:36.097083] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:22.027 EAL: No free 2048 kB hugepages reported on node 1
00:22:22.027 [2024-05-15 06:59:36.180468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:22.285 [2024-05-15 06:59:36.301499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:22:22.285 [2024-05-15 06:59:36.301640] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
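As in tc1, the target is pinned with -m 0x1E while the initiator-side apps report 'Total cores available: 1' and run on core 0: mask 0x1E is binary 11110, so the reactor notices that follow land on cores 1-4 and never share a core with bdevperf. Expanding such a mask by hand:

# Expand an SPDK core mask into a core list (0x1E -> 1 2 3 4)
mask=0x1E
for ((core = 0; core < 64; core++)); do
  (( (mask >> core) & 1 )) && printf '%d ' "$core"
done; echo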
00:22:22.285 [2024-05-15 06:59:36.301655] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.285 [2024-05-15 06:59:36.301668] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.285 [2024-05-15 06:59:36.301718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.285 [2024-05-15 06:59:36.301775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.285 [2024-05-15 06:59:36.301840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:22.285 [2024-05-15 06:59:36.301843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.218 06:59:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:23.218 06:59:37 -- common/autotest_common.sh@852 -- # return 0 00:22:23.218 06:59:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:23.218 06:59:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:23.218 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.218 06:59:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.218 06:59:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.218 06:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.218 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.218 [2024-05-15 06:59:37.143603] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.218 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.218 06:59:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:23.218 06:59:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:23.218 06:59:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:23.218 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.218 06:59:37 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:23.218 06:59:37 -- target/shutdown.sh@28 -- # cat 00:22:23.218 06:59:37 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:23.218 06:59:37 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.218 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.218 Malloc1 00:22:23.218 [2024-05-15 06:59:37.218640] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.218 Malloc2 00:22:23.218 Malloc3 00:22:23.218 Malloc4 00:22:23.218 Malloc5 00:22:23.218 Malloc6 00:22:23.476 Malloc7 00:22:23.476 Malloc8 00:22:23.476 Malloc9 00:22:23.476 Malloc10 00:22:23.476 06:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.476 06:59:37 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:23.476 06:59:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:23.476 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.476 06:59:37 -- target/shutdown.sh@102 -- # perfpid=569026 00:22:23.476 06:59:37 -- target/shutdown.sh@103 -- # waitforlisten 569026 /var/tmp/bdevperf.sock 00:22:23.476 06:59:37 -- common/autotest_common.sh@819 -- # '[' -z 569026 ']' 00:22:23.476 06:59:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.476 06:59:37 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:23.476 06:59:37 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:23.476 06:59:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:23.476 06:59:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.476 06:59:37 -- nvmf/common.sh@520 -- # config=() 00:22:23.476 06:59:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:23.476 06:59:37 -- nvmf/common.sh@520 -- # local subsystem config 00:22:23.476 06:59:37 -- common/autotest_common.sh@10 -- # set +x 00:22:23.476 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.476 { 00:22:23.476 "params": { 00:22:23.476 "name": "Nvme$subsystem", 00:22:23.476 "trtype": "$TEST_TRANSPORT", 00:22:23.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.476 "adrfam": "ipv4", 00:22:23.476 "trsvcid": "$NVMF_PORT", 00:22:23.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.476 "hdgst": ${hdgst:-false}, 00:22:23.476 "ddgst": ${ddgst:-false} 00:22:23.476 }, 00:22:23.476 "method": "bdev_nvme_attach_controller" 00:22:23.476 } 00:22:23.476 EOF 00:22:23.476 )") 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.476 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.476 { 00:22:23.476 "params": { 00:22:23.476 "name": "Nvme$subsystem", 00:22:23.476 "trtype": "$TEST_TRANSPORT", 00:22:23.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.476 "adrfam": "ipv4", 00:22:23.476 "trsvcid": "$NVMF_PORT", 00:22:23.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.476 "hdgst": ${hdgst:-false}, 00:22:23.476 "ddgst": ${ddgst:-false} 00:22:23.476 }, 00:22:23.476 "method": "bdev_nvme_attach_controller" 00:22:23.476 } 00:22:23.476 EOF 00:22:23.476 )") 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.476 06:59:37 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.476 { 00:22:23.476 "params": { 00:22:23.476 "name": "Nvme$subsystem", 00:22:23.476 "trtype": "$TEST_TRANSPORT", 00:22:23.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.476 "adrfam": "ipv4", 00:22:23.476 "trsvcid": "$NVMF_PORT", 00:22:23.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.476 "hdgst": ${hdgst:-false}, 00:22:23.476 "ddgst": ${ddgst:-false} 00:22:23.476 }, 00:22:23.476 "method": "bdev_nvme_attach_controller" 00:22:23.476 } 00:22:23.476 EOF 00:22:23.476 )") 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.476 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.476 { 00:22:23.476 "params": { 00:22:23.476 "name": "Nvme$subsystem", 00:22:23.476 "trtype": "$TEST_TRANSPORT", 00:22:23.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.476 "adrfam": "ipv4", 00:22:23.476 "trsvcid": "$NVMF_PORT", 00:22:23.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.476 "hdgst": ${hdgst:-false}, 00:22:23.476 "ddgst": ${ddgst:-false} 00:22:23.476 }, 00:22:23.476 "method": "bdev_nvme_attach_controller" 00:22:23.476 } 00:22:23.476 EOF 00:22:23.476 )") 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.476 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.476 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.476 { 00:22:23.476 "params": { 00:22:23.476 "name": "Nvme$subsystem", 00:22:23.476 "trtype": "$TEST_TRANSPORT", 00:22:23.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "$NVMF_PORT", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.477 "hdgst": ${hdgst:-false}, 00:22:23.477 "ddgst": ${ddgst:-false} 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 } 00:22:23.477 EOF 00:22:23.477 )") 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.477 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.477 { 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme$subsystem", 00:22:23.477 "trtype": "$TEST_TRANSPORT", 00:22:23.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "$NVMF_PORT", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.477 "hdgst": ${hdgst:-false}, 00:22:23.477 "ddgst": ${ddgst:-false} 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 } 00:22:23.477 EOF 00:22:23.477 )") 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.477 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.477 { 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme$subsystem", 00:22:23.477 "trtype": "$TEST_TRANSPORT", 00:22:23.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "$NVMF_PORT", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.477 "hdgst": 
${hdgst:-false}, 00:22:23.477 "ddgst": ${ddgst:-false} 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 } 00:22:23.477 EOF 00:22:23.477 )") 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.477 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.477 { 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme$subsystem", 00:22:23.477 "trtype": "$TEST_TRANSPORT", 00:22:23.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "$NVMF_PORT", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.477 "hdgst": ${hdgst:-false}, 00:22:23.477 "ddgst": ${ddgst:-false} 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 } 00:22:23.477 EOF 00:22:23.477 )") 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.477 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.477 { 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme$subsystem", 00:22:23.477 "trtype": "$TEST_TRANSPORT", 00:22:23.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "$NVMF_PORT", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.477 "hdgst": ${hdgst:-false}, 00:22:23.477 "ddgst": ${ddgst:-false} 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 } 00:22:23.477 EOF 00:22:23.477 )") 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.477 06:59:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:23.477 { 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme$subsystem", 00:22:23.477 "trtype": "$TEST_TRANSPORT", 00:22:23.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "$NVMF_PORT", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.477 "hdgst": ${hdgst:-false}, 00:22:23.477 "ddgst": ${ddgst:-false} 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 } 00:22:23.477 EOF 00:22:23.477 )") 00:22:23.477 06:59:37 -- nvmf/common.sh@542 -- # cat 00:22:23.477 06:59:37 -- nvmf/common.sh@544 -- # jq . 
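
The records above are gen_nvmf_target_json at work: each pass of the for-loop appends one bdev_nvme_attach_controller stanza to the config array through a here-doc, and the jq step validates the assembled document before bdevperf consumes it. A condensed sketch of that accumulation pattern, taken from the xtrace (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the test environment; the real helper splices the joined list into a larger subsystem JSON, which is what the IFS=, and printf records that follow do):

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=","
printf '%s\n' "${config[*]}"    # "{...},{...}" - spliced into the final target
                                # JSON's config array and sanity-checked with jq .
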
00:22:23.477 06:59:37 -- nvmf/common.sh@545 -- # IFS=, 00:22:23.477 06:59:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme1", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme2", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme3", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme4", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme5", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme6", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme7", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme8", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": 
"bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme9", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 },{ 00:22:23.477 "params": { 00:22:23.477 "name": "Nvme10", 00:22:23.477 "trtype": "tcp", 00:22:23.477 "traddr": "10.0.0.2", 00:22:23.477 "adrfam": "ipv4", 00:22:23.477 "trsvcid": "4420", 00:22:23.477 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:23.477 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:23.477 "hdgst": false, 00:22:23.477 "ddgst": false 00:22:23.477 }, 00:22:23.477 "method": "bdev_nvme_attach_controller" 00:22:23.477 }' 00:22:23.735 [2024-05-15 06:59:37.712518] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:23.735 [2024-05-15 06:59:37.712601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569026 ] 00:22:23.735 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.735 [2024-05-15 06:59:37.787685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.735 [2024-05-15 06:59:37.895290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.655 Running I/O for 10 seconds... 00:22:26.219 06:59:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:26.219 06:59:40 -- common/autotest_common.sh@852 -- # return 0 00:22:26.219 06:59:40 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:26.219 06:59:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.219 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:22:26.219 06:59:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.219 06:59:40 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:26.219 06:59:40 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:26.219 06:59:40 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:26.219 06:59:40 -- target/shutdown.sh@57 -- # local ret=1 00:22:26.219 06:59:40 -- target/shutdown.sh@58 -- # local i 00:22:26.219 06:59:40 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:26.219 06:59:40 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:26.219 06:59:40 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:26.219 06:59:40 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:26.219 06:59:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.219 06:59:40 -- common/autotest_common.sh@10 -- # set +x 00:22:26.219 06:59:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.219 06:59:40 -- target/shutdown.sh@60 -- # read_io_count=129 00:22:26.220 06:59:40 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:22:26.220 06:59:40 -- target/shutdown.sh@64 -- # ret=0 00:22:26.220 06:59:40 -- target/shutdown.sh@65 -- # break 00:22:26.220 06:59:40 -- target/shutdown.sh@69 -- # return 0 00:22:26.220 06:59:40 -- target/shutdown.sh@109 -- # killprocess 569026 00:22:26.220 06:59:40 -- common/autotest_common.sh@926 -- # '[' -z 569026 ']' 00:22:26.220 06:59:40 -- common/autotest_common.sh@930 -- # kill -0 569026 
00:22:26.220 06:59:40 -- common/autotest_common.sh@931 -- # uname
00:22:26.220 06:59:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:26.220 06:59:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 569026
00:22:26.220 06:59:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:22:26.220 06:59:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:22:26.220 06:59:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 569026'
killing process with pid 569026
00:22:26.220 06:59:40 -- common/autotest_common.sh@945 -- # kill 569026
00:22:26.220 06:59:40 -- common/autotest_common.sh@950 -- # wait 569026
00:22:26.220 Received shutdown signal, test time was about 0.583553 seconds
00:22:26.220
00:22:26.220 Latency(us)
00:22:26.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:26.220 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme1n1 : 0.58 391.26 24.45 0.00 0.00 158198.54 20874.43 159228.21
00:22:26.220 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme2n1 : 0.58 396.45 24.78 0.00 0.00 154026.21 19612.25 170102.33
00:22:26.220 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme3n1 : 0.57 399.46 24.97 0.00 0.00 150277.16 22913.33 120392.06
00:22:26.220 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme4n1 : 0.54 349.44 21.84 0.00 0.00 168794.93 24175.50 146800.64
00:22:26.220 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme5n1 : 0.57 398.53 24.91 0.00 0.00 146376.93 22524.97 121168.78
00:22:26.220 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme6n1 : 0.57 397.59 24.85 0.00 0.00 144998.93 19126.80 132819.63
00:22:26.220 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme7n1 : 0.58 393.94 24.62 0.00 0.00 144091.05 19418.07 126605.84
00:22:26.220 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme8n1 : 0.55 344.47 21.53 0.00 0.00 160904.80 22039.51 132042.90
00:22:26.220 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme9n1 : 0.58 393.04 24.57 0.00 0.00 140426.74 20486.07 123498.95
00:22:26.220 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:26.220 Verification LBA range: start 0x0 length 0x400
00:22:26.220 Nvme10n1 : 0.57 339.08 21.19 0.00 0.00 158195.99 12330.48 130489.46
00:22:26.220 ===================================================================================================================
00:22:26.220 Total : 3803.26 237.70 0.00 0.00 152108.29 12330.48 170102.33
00:22:26.476 06:59:40 -- target/shutdown.sh@112 -- # sleep 1
00:22:27.407 06:59:41 -- target/shutdown.sh@113 -- # kill -0 568831
00:22:27.665 06:59:41 -- target/shutdown.sh@115 --
stoptarget 00:22:27.665 06:59:41 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:27.665 06:59:41 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:27.665 06:59:41 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.665 06:59:41 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:27.665 06:59:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:27.665 06:59:41 -- nvmf/common.sh@116 -- # sync 00:22:27.665 06:59:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:27.665 06:59:41 -- nvmf/common.sh@119 -- # set +e 00:22:27.665 06:59:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:27.665 06:59:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:27.665 rmmod nvme_tcp 00:22:27.665 rmmod nvme_fabrics 00:22:27.665 rmmod nvme_keyring 00:22:27.665 06:59:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:27.665 06:59:41 -- nvmf/common.sh@123 -- # set -e 00:22:27.665 06:59:41 -- nvmf/common.sh@124 -- # return 0 00:22:27.665 06:59:41 -- nvmf/common.sh@477 -- # '[' -n 568831 ']' 00:22:27.665 06:59:41 -- nvmf/common.sh@478 -- # killprocess 568831 00:22:27.665 06:59:41 -- common/autotest_common.sh@926 -- # '[' -z 568831 ']' 00:22:27.665 06:59:41 -- common/autotest_common.sh@930 -- # kill -0 568831 00:22:27.665 06:59:41 -- common/autotest_common.sh@931 -- # uname 00:22:27.665 06:59:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.665 06:59:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 568831 00:22:27.665 06:59:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:27.665 06:59:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:27.665 06:59:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 568831' 00:22:27.665 killing process with pid 568831 00:22:27.665 06:59:41 -- common/autotest_common.sh@945 -- # kill 568831 00:22:27.665 06:59:41 -- common/autotest_common.sh@950 -- # wait 568831 00:22:28.231 06:59:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:28.231 06:59:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:28.231 06:59:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:28.231 06:59:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.231 06:59:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:28.231 06:59:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.231 06:59:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.231 06:59:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.131 06:59:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:30.131 00:22:30.131 real 0m8.408s 00:22:30.131 user 0m26.830s 00:22:30.131 sys 0m1.453s 00:22:30.131 06:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.131 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.131 ************************************ 00:22:30.131 END TEST nvmf_shutdown_tc2 00:22:30.131 ************************************ 00:22:30.131 06:59:44 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:30.131 06:59:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:30.131 06:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:30.131 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.131 ************************************ 00:22:30.131 START TEST 
nvmf_shutdown_tc3 00:22:30.131 ************************************ 00:22:30.131 06:59:44 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:22:30.131 06:59:44 -- target/shutdown.sh@120 -- # starttarget 00:22:30.131 06:59:44 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:30.131 06:59:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:30.131 06:59:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.131 06:59:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:30.131 06:59:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:30.131 06:59:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:30.131 06:59:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.131 06:59:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.131 06:59:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.131 06:59:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:30.131 06:59:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:30.131 06:59:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:30.131 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.131 06:59:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:30.131 06:59:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:30.131 06:59:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:30.131 06:59:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:30.131 06:59:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:30.131 06:59:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:30.131 06:59:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:30.131 06:59:44 -- nvmf/common.sh@294 -- # net_devs=() 00:22:30.131 06:59:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:30.131 06:59:44 -- nvmf/common.sh@295 -- # e810=() 00:22:30.131 06:59:44 -- nvmf/common.sh@295 -- # local -ga e810 00:22:30.131 06:59:44 -- nvmf/common.sh@296 -- # x722=() 00:22:30.131 06:59:44 -- nvmf/common.sh@296 -- # local -ga x722 00:22:30.131 06:59:44 -- nvmf/common.sh@297 -- # mlx=() 00:22:30.131 06:59:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:30.131 06:59:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.131 06:59:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.132 06:59:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.132 06:59:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.132 06:59:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:30.132 06:59:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:30.132 06:59:44 -- nvmf/common.sh@334 -- # 
(( 2 == 0 )) 00:22:30.132 06:59:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:30.132 06:59:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:30.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:30.132 06:59:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:30.132 06:59:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:30.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:30.132 06:59:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:30.132 06:59:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:30.132 06:59:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.132 06:59:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:30.132 06:59:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.132 06:59:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:30.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:30.132 06:59:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.132 06:59:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:30.132 06:59:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.132 06:59:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:30.132 06:59:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.132 06:59:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:30.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:30.132 06:59:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.132 06:59:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:30.132 06:59:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:30.132 06:59:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:30.132 06:59:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:30.132 06:59:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.132 06:59:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.132 06:59:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.132 06:59:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:30.132 06:59:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.132 06:59:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.132 06:59:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:30.132 06:59:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.132 06:59:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:30.132 06:59:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:30.132 06:59:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:30.132 06:59:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.132 06:59:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.390 06:59:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.390 06:59:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.390 06:59:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:30.390 06:59:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.390 06:59:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.390 06:59:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.390 06:59:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:30.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:22:30.390 00:22:30.390 --- 10.0.0.2 ping statistics --- 00:22:30.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.390 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:22:30.390 06:59:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:22:30.390 00:22:30.390 --- 10.0.0.1 ping statistics --- 00:22:30.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.390 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:22:30.390 06:59:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.390 06:59:44 -- nvmf/common.sh@410 -- # return 0 00:22:30.390 06:59:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:30.390 06:59:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.390 06:59:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:30.390 06:59:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:30.390 06:59:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.390 06:59:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:30.390 06:59:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:30.390 06:59:44 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:30.390 06:59:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:30.390 06:59:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:30.390 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.390 06:59:44 -- nvmf/common.sh@469 -- # nvmfpid=569959 00:22:30.390 06:59:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:30.390 06:59:44 -- nvmf/common.sh@470 -- # waitforlisten 569959 00:22:30.390 06:59:44 -- common/autotest_common.sh@819 -- # '[' -z 569959 ']' 00:22:30.390 06:59:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.390 06:59:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:30.390 06:59:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:30.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.390 06:59:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:30.390 06:59:44 -- common/autotest_common.sh@10 -- # set +x 00:22:30.390 [2024-05-15 06:59:44.550584] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:30.390 [2024-05-15 06:59:44.550665] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.390 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.648 [2024-05-15 06:59:44.630759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.648 [2024-05-15 06:59:44.746307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:30.648 [2024-05-15 06:59:44.746470] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.648 [2024-05-15 06:59:44.746490] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.648 [2024-05-15 06:59:44.746504] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.648 [2024-05-15 06:59:44.746587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.648 [2024-05-15 06:59:44.746713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.648 [2024-05-15 06:59:44.746834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:30.648 [2024-05-15 06:59:44.746837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.581 06:59:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:31.581 06:59:45 -- common/autotest_common.sh@852 -- # return 0 00:22:31.581 06:59:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:31.581 06:59:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:31.581 06:59:45 -- common/autotest_common.sh@10 -- # set +x 00:22:31.581 06:59:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.581 06:59:45 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.581 06:59:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:31.581 06:59:45 -- common/autotest_common.sh@10 -- # set +x 00:22:31.581 [2024-05-15 06:59:45.528450] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.581 06:59:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:31.581 06:59:45 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:31.581 06:59:45 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:31.581 06:59:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:31.581 06:59:45 -- common/autotest_common.sh@10 -- # set +x 00:22:31.581 06:59:45 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.581 06:59:45 -- target/shutdown.sh@28 -- # cat 00:22:31.581 06:59:45 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:31.581 06:59:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:31.581 06:59:45 -- common/autotest_common.sh@10 -- # set +x 00:22:31.581 Malloc1 00:22:31.581 [2024-05-15 06:59:45.603760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.581 Malloc2 00:22:31.581 Malloc3 00:22:31.581 Malloc4 00:22:31.581 Malloc5 00:22:31.581 Malloc6 00:22:31.838 Malloc7 00:22:31.838 Malloc8 00:22:31.839 Malloc9 00:22:31.839 Malloc10 00:22:31.839 06:59:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:31.839 06:59:46 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:31.839 06:59:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:31.839 06:59:46 -- common/autotest_common.sh@10 -- # set +x 00:22:31.839 06:59:46 -- target/shutdown.sh@124 -- # perfpid=570273 00:22:31.839 06:59:46 -- target/shutdown.sh@125 -- # waitforlisten 570273 /var/tmp/bdevperf.sock 00:22:31.839 06:59:46 -- common/autotest_common.sh@819 -- # '[' -z 570273 ']' 00:22:31.839 06:59:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.839 06:59:46 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:31.839 06:59:46 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:31.839 06:59:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:31.839 06:59:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.839 06:59:46 -- nvmf/common.sh@520 -- # config=() 00:22:31.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
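
Each of the ten cat records above appends one subsystem's worth of RPCs to rpcs.txt, and the single rpc_cmd that follows replays the whole batch in one rpc.py session; only the resulting Malloc bdev names and the 10.0.0.2:4420 listener notice get echoed. A plausible reconstruction of the batch, since the RPC lines themselves are not shown in the trace (the 64 MiB / 512-byte Malloc geometry and the SPDK$i serial are assumed harness defaults):

for i in "${num_subsystems[@]}"; do
    {
        echo "bdev_malloc_create 64 512 -b Malloc$i"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> "$testdir/rpcs.txt"    # one stanza per subsystem, cnode1..cnode10
done
rpc_cmd < "$testdir/rpcs.txt"   # replay the accumulated batch against the target
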
00:22:31.839 06:59:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:31.839 06:59:46 -- nvmf/common.sh@520 -- # local subsystem config 00:22:31.839 06:59:46 -- common/autotest_common.sh@10 -- # set +x 00:22:31.839 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:31.839 { 00:22:31.839 "params": { 00:22:31.839 "name": "Nvme$subsystem", 00:22:31.839 "trtype": "$TEST_TRANSPORT", 00:22:31.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.839 "adrfam": "ipv4", 00:22:31.839 "trsvcid": "$NVMF_PORT", 00:22:31.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.839 "hdgst": ${hdgst:-false}, 00:22:31.839 "ddgst": ${ddgst:-false} 00:22:31.839 }, 00:22:31.839 "method": "bdev_nvme_attach_controller" 00:22:31.839 } 00:22:31.839 EOF 00:22:31.839 )") 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:31.839 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:31.839 { 00:22:31.839 "params": { 00:22:31.839 "name": "Nvme$subsystem", 00:22:31.839 "trtype": "$TEST_TRANSPORT", 00:22:31.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.839 "adrfam": "ipv4", 00:22:31.839 "trsvcid": "$NVMF_PORT", 00:22:31.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.839 "hdgst": ${hdgst:-false}, 00:22:31.839 "ddgst": ${ddgst:-false} 00:22:31.839 }, 00:22:31.839 "method": "bdev_nvme_attach_controller" 00:22:31.839 } 00:22:31.839 EOF 00:22:31.839 )") 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:31.839 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:31.839 { 00:22:31.839 "params": { 00:22:31.839 "name": "Nvme$subsystem", 00:22:31.839 "trtype": "$TEST_TRANSPORT", 00:22:31.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.839 "adrfam": "ipv4", 00:22:31.839 "trsvcid": "$NVMF_PORT", 00:22:31.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.839 "hdgst": ${hdgst:-false}, 00:22:31.839 "ddgst": ${ddgst:-false} 00:22:31.839 }, 00:22:31.839 "method": "bdev_nvme_attach_controller" 00:22:31.839 } 00:22:31.839 EOF 00:22:31.839 )") 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:31.839 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:31.839 { 00:22:31.839 "params": { 00:22:31.839 "name": "Nvme$subsystem", 00:22:31.839 "trtype": "$TEST_TRANSPORT", 00:22:31.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.839 "adrfam": "ipv4", 00:22:31.839 "trsvcid": "$NVMF_PORT", 00:22:31.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.839 "hdgst": ${hdgst:-false}, 00:22:31.839 "ddgst": ${ddgst:-false} 00:22:31.839 }, 00:22:31.839 "method": "bdev_nvme_attach_controller" 00:22:31.839 } 00:22:31.839 EOF 00:22:31.839 )") 00:22:31.839 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:32.097 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:32.097 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:32.097 { 00:22:32.097 "params": { 00:22:32.097 "name": "Nvme$subsystem", 00:22:32.097 "trtype": "$TEST_TRANSPORT", 00:22:32.097 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:22:32.097 "adrfam": "ipv4", 00:22:32.097 "trsvcid": "$NVMF_PORT", 00:22:32.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.097 "hdgst": ${hdgst:-false}, 00:22:32.097 "ddgst": ${ddgst:-false} 00:22:32.097 }, 00:22:32.097 "method": "bdev_nvme_attach_controller" 00:22:32.097 } 00:22:32.097 EOF 00:22:32.097 )") 00:22:32.097 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:32.097 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:32.097 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:32.097 { 00:22:32.097 "params": { 00:22:32.097 "name": "Nvme$subsystem", 00:22:32.097 "trtype": "$TEST_TRANSPORT", 00:22:32.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.097 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "$NVMF_PORT", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.098 "hdgst": ${hdgst:-false}, 00:22:32.098 "ddgst": ${ddgst:-false} 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 } 00:22:32.098 EOF 00:22:32.098 )") 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:32.098 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:32.098 { 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme$subsystem", 00:22:32.098 "trtype": "$TEST_TRANSPORT", 00:22:32.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "$NVMF_PORT", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.098 "hdgst": ${hdgst:-false}, 00:22:32.098 "ddgst": ${ddgst:-false} 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 } 00:22:32.098 EOF 00:22:32.098 )") 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:32.098 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:32.098 { 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme$subsystem", 00:22:32.098 "trtype": "$TEST_TRANSPORT", 00:22:32.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "$NVMF_PORT", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.098 "hdgst": ${hdgst:-false}, 00:22:32.098 "ddgst": ${ddgst:-false} 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 } 00:22:32.098 EOF 00:22:32.098 )") 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:32.098 06:59:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:32.098 { 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme$subsystem", 00:22:32.098 "trtype": "$TEST_TRANSPORT", 00:22:32.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "$NVMF_PORT", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.098 "hdgst": ${hdgst:-false}, 00:22:32.098 "ddgst": ${ddgst:-false} 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 } 00:22:32.098 EOF 00:22:32.098 )") 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:32.098 06:59:46 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:22:32.098 { 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme$subsystem", 00:22:32.098 "trtype": "$TEST_TRANSPORT", 00:22:32.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "$NVMF_PORT", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:32.098 "hdgst": ${hdgst:-false}, 00:22:32.098 "ddgst": ${ddgst:-false} 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 } 00:22:32.098 EOF 00:22:32.098 )") 00:22:32.098 06:59:46 -- nvmf/common.sh@542 -- # cat 00:22:32.098 06:59:46 -- nvmf/common.sh@544 -- # jq . 00:22:32.098 06:59:46 -- nvmf/common.sh@545 -- # IFS=, 00:22:32.098 06:59:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme1", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme2", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme3", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme4", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme5", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme6", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme7", 00:22:32.098 "trtype": 
"tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme8", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme9", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 },{ 00:22:32.098 "params": { 00:22:32.098 "name": "Nvme10", 00:22:32.098 "trtype": "tcp", 00:22:32.098 "traddr": "10.0.0.2", 00:22:32.098 "adrfam": "ipv4", 00:22:32.098 "trsvcid": "4420", 00:22:32.098 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:32.098 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:32.098 "hdgst": false, 00:22:32.098 "ddgst": false 00:22:32.098 }, 00:22:32.098 "method": "bdev_nvme_attach_controller" 00:22:32.098 }' 00:22:32.098 [2024-05-15 06:59:46.099251] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:32.098 [2024-05-15 06:59:46.099348] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570273 ] 00:22:32.098 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.098 [2024-05-15 06:59:46.175293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.098 [2024-05-15 06:59:46.283731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.471 Running I/O for 10 seconds... 
00:22:33.731 06:59:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.731 06:59:47 -- common/autotest_common.sh@852 -- # return 0 00:22:33.731 06:59:47 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:33.731 06:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.731 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:22:33.731 06:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.731 06:59:47 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.731 06:59:47 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:33.731 06:59:47 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:33.731 06:59:47 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:33.731 06:59:47 -- target/shutdown.sh@57 -- # local ret=1 00:22:33.731 06:59:47 -- target/shutdown.sh@58 -- # local i 00:22:33.731 06:59:47 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:33.731 06:59:47 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:33.731 06:59:47 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.731 06:59:47 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:33.731 06:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.731 06:59:47 -- common/autotest_common.sh@10 -- # set +x 00:22:33.731 06:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.731 06:59:47 -- target/shutdown.sh@60 -- # read_io_count=42 00:22:33.731 06:59:47 -- target/shutdown.sh@63 -- # '[' 42 -ge 100 ']' 00:22:33.731 06:59:47 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:33.992 06:59:48 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:33.992 06:59:48 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:33.992 06:59:48 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.992 06:59:48 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:33.992 06:59:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.992 06:59:48 -- common/autotest_common.sh@10 -- # set +x 00:22:33.992 06:59:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.992 06:59:48 -- target/shutdown.sh@60 -- # read_io_count=129 00:22:33.992 06:59:48 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:22:33.992 06:59:48 -- target/shutdown.sh@64 -- # ret=0 00:22:33.992 06:59:48 -- target/shutdown.sh@65 -- # break 00:22:33.992 06:59:48 -- target/shutdown.sh@69 -- # return 0 00:22:33.992 06:59:48 -- target/shutdown.sh@134 -- # killprocess 569959 00:22:33.992 06:59:48 -- common/autotest_common.sh@926 -- # '[' -z 569959 ']' 00:22:33.992 06:59:48 -- common/autotest_common.sh@930 -- # kill -0 569959 00:22:33.992 06:59:48 -- common/autotest_common.sh@931 -- # uname 00:22:33.992 06:59:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:33.992 06:59:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 569959 00:22:33.992 06:59:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:33.992 06:59:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:33.992 06:59:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 569959' 00:22:33.992 killing process with pid 569959 00:22:33.992 06:59:48 -- common/autotest_common.sh@945 -- # kill 569959 00:22:33.992 06:59:48 -- common/autotest_common.sh@950 -- # wait 569959 00:22:33.992 [2024-05-15 
06:59:48.216582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169ec60 is same with the state(5) to be set
00:22:33.992 [... the same recv-state error for tqpair=0x169ec60 repeats through 06:59:48.217304 ...]
00:22:33.993 [2024-05-15 06:59:48.218029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:33.993 [2024-05-15 06:59:48.218069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.993 [2024-05-15 06:59:48.218086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:33.993 [2024-05-15 06:59:48.218099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.993 [2024-05-15 06:59:48.218113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:33.993 [2024-05-15 06:59:48.218126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.993 [2024-05-15 06:59:48.218140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:33.993 [2024-05-15 06:59:48.218153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.993 [2024-05-15 06:59:48.218166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5f210 is same with the state(5) to be set
00:22:33.993 [2024-05-15 06:59:48.218301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.993 [2024-05-15 06:59:48.218325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.993 [2024-05-15 06:59:48.218358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.993 [2024-05-15 06:59:48.218375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.993 [2024-05-15 06:59:48.218392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.993 [2024-05-15 06:59:48.218406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.993 [2024-05-15 06:59:48.218421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.993 [2024-05-15 06:59:48.218736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.993 [2024-05-15 06:59:48.218856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.993 [2024-05-15 06:59:48.218871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.218885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.218900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.218914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.218937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.218953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.218980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.218995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.219025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.219295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.219377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.219402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.219484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 [2024-05-15 06:59:48.219510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 
[2024-05-15 06:59:48.219653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219678] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.994 [2024-05-15 06:59:48.219693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 
[2024-05-15 06:59:48.219705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.994 
[2024-05-15 06:59:48.219718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.994 [2024-05-15 06:59:48.219731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.219730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.219754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.219766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 
[2024-05-15 06:59:48.219778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.219791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.219803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.219815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 
[2024-05-15 06:59:48.219832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 
[2024-05-15 06:59:48.219845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 
[2024-05-15 06:59:48.219858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 
[2024-05-15 06:59:48.219871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.219887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.219898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.219910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 
[2024-05-15 06:59:48.219923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.219943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.219949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.220096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 
[2024-05-15 06:59:48.220102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.220119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.220156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 
[2024-05-15 06:59:48.220234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.220236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 
[2024-05-15 06:59:48.220246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ddb0 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.220251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 
[2024-05-15 06:59:48.220267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 
[2024-05-15 06:59:48.220296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.995 [2024-05-15 06:59:48.220311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.995 [2024-05-15 06:59:48.220589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.995 [2024-05-15 06:59:48.220715] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d6f40 was disconnected and freed. reset controller. 
00:22:33.995 [2024-05-15 06:59:48.222816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.222988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.995 [2024-05-15 06:59:48.223001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223395] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.223589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:33.996 [2024-05-15 06:59:48.224099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:34.267 [2024-05-15 06:59:48.224326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:34.267 [2024-05-15 06:59:48.224343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:34.267 [2024-05-15 06:59:48.224344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f210 (9): Bad file descriptor 00:22:34.267 [2024-05-15 06:59:48.224356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:34.267 [2024-05-15 06:59:48.224368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f110 is same with the state(5) to be set 00:22:34.267 
[2024-05-15 06:59:48.226336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 
06:59:48.226671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.226970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.226986] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.267 [2024-05-15 06:59:48.227437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.267 [2024-05-15 06:59:48.227451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.227467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268 [2024-05-15 06:59:48.227480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.227496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268 [2024-05-15 06:59:48.227510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.227525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268 [2024-05-15 06:59:48.227539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.227555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268 [2024-05-15 06:59:48.227569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.227584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.227985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.227995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.227998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.228011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.228037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.228050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.228062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.228075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.268
[2024-05-15 06:59:48.228087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169f5c0 is same with the state(5) to be set 00:22:34.268
[2024-05-15 06:59:48.228106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268
[2024-05-15 06:59:48.228201] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf4ee70 was disconnected and freed. reset controller. 
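Every completion notice in the burst above carries the same status pair. As a reading aid, here is a minimal sketch (field layout per the NVMe base spec; the raw value is an assumed example, not taken from this run) of how the (00/08) pair and the p/m/dnr flags unpack from the 16-bit completion status word:

```c
/* Minimal sketch, not SPDK source: decode the status halfword behind
 * notices like "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0".
 * Layout follows the NVMe base spec (CQE Dword 3, bits 31:16). */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t status = 0x0010;             /* assumed example: SCT=0x0, SC=0x08 */

    unsigned p   = status & 0x1;          /* bit 0: phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1: status code */
    unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type */
    unsigned m   = (status >> 14) & 0x1;  /* bit 14: more */
    unsigned dnr = (status >> 15) & 0x1;  /* bit 15: do not retry */

    /* SCT 0x0 / SC 0x08 is the generic "Command Aborted due to SQ Deletion"
     * status, which spdk_nvme_print_completion renders as (00/08). */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}
```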
00:22:34.268 [2024-05-15 06:59:48.228974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125a60 is same with the state(5) to be set 00:22:34.268 [2024-05-15 06:59:48.229136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.268 [2024-05-15 06:59:48.229256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61f70 is same with the state(5) to be set 00:22:34.268 [2024-05-15 06:59:48.229335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.268 [2024-05-15 06:59:48.229356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269 [2024-05-15 06:59:48.229371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.269
[2024-05-15 06:59:48.229384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269
[2024-05-15 06:59:48.229398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.269
[2024-05-15 06:59:48.229417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269
[2024-05-15 06:59:48.229431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.269
[2024-05-15 06:59:48.229444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269
[2024-05-15 06:59:48.229457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9a4f0 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231148] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-05-15 06:59:48.231155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231182] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:34.269
[2024-05-15 06:59:48.231187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125a60 (9): Bad file descriptor 00:22:34.269
[2024-05-15 06:59:48.231214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231254] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231319] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the 
state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269
[2024-05-15 06:59:48.231727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269
[2024-05-15 06:59:48.231755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269
[2024-05-15 06:59:48.231771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269
[2024-05-15 06:59:48.231806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269
[2024-05-15 06:59:48.231818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269
[2024-05-15 06:59:48.231831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269
[2024-05-15 06:59:48.231843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 
[2024-05-15 06:59:48.231843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269 [2024-05-15 06:59:48.231855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269 [2024-05-15 06:59:48.231867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269 [2024-05-15 06:59:48.231880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269 [2024-05-15 06:59:48.231892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269 [2024-05-15 06:59:48.231917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269 [2024-05-15 06:59:48.231938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269 [2024-05-15 06:59:48.231952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269 [2024-05-15 06:59:48.231970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.269 [2024-05-15 06:59:48.231990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.269 [2024-05-15 06:59:48.231999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.269 [2024-05-15 06:59:48.232003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 
00:22:34.270 [2024-05-15 06:59:48.232015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.270 [2024-05-15 06:59:48.232016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.270 [2024-05-15 06:59:48.232033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fa50 is same with the state(5) to be set 00:22:34.270 [2024-05-15 06:59:48.232049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270 [2024-05-15 06:59:48.232914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270 [2024-05-15 06:59:48.232917] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.232928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270
[2024-05-15 06:59:48.232955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270
[2024-05-15 06:59:48.232955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.232977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270
[2024-05-15 06:59:48.232977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.232993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.232995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270
[2024-05-15 06:59:48.233006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.233010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270
[2024-05-15 06:59:48.233018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.233026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270
[2024-05-15 06:59:48.233031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.233040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270
[2024-05-15 06:59:48.233049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.233056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270
[2024-05-15 06:59:48.233062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.233070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.270
[2024-05-15 06:59:48.233075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.233086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.270
[2024-05-15 06:59:48.233087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.270
[2024-05-15 06:59:48.233102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.271
[2024-05-15 06:59:48.233623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.271
[2024-05-15 06:59:48.233628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.271
[2024-05-15 06:59:48.233635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272
[2024-05-15 06:59:48.233648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272
[2024-05-15 06:59:48.233660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272
[2024-05-15 06:59:48.233689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272
[2024-05-15 06:59:48.233702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272
[2024-05-15 06:59:48.233714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272
[2024-05-15 06:59:48.233726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272
[2024-05-15 06:59:48.233743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272
[2024-05-15 06:59:48.233753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 
00:22:34.272 [2024-05-15 06:59:48.233758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272 [2024-05-15 06:59:48.233765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.233774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272 [2024-05-15 06:59:48.233777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x169fdb0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.233788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272 [2024-05-15 06:59:48.233805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272 [2024-05-15 06:59:48.233819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272 [2024-05-15 06:59:48.233835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272 [2024-05-15 06:59:48.233848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272 [2024-05-15 06:59:48.233864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272 [2024-05-15 06:59:48.233884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272 [2024-05-15 06:59:48.233913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.272 [2024-05-15 06:59:48.233949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.272 [2024-05-15 06:59:48.233983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2171ed0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.234536] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2171ed0 was disconnected and freed. reset controller. 
00:22:34.272 [2024-05-15 06:59:48.234926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.234961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.234983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.234995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.235258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0260 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.236410] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:34.272 [2024-05-15 06:59:48.236444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9a4f0 (9): Bad file descriptor 00:22:34.272 [2024-05-15 06:59:48.237252] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: 
*ERROR*: Unexpected PDU type 0x00 00:22:34.272 [2024-05-15 06:59:48.238590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.272 [2024-05-15 06:59:48.238734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.238842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238856]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.238869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.238882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.238894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.238907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.238920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.238954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.238971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120490 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.238988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61f70 (9): Bad file descriptor 00:22:34.273 [2024-05-15 06:59:48.239012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11208c0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec2a50 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a06f0 is same with the state(5) to be set 00:22:34.273 [2024-05-15 06:59:48.239429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.273 [2024-05-15 06:59:48.239471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.273 [2024-05-15 06:59:48.239484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.239498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.274 [2024-05-15 06:59:48.239511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.239523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58e60 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.239638] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:34.274 [2024-05-15 06:59:48.239712] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:34.274 [2024-05-15 06:59:48.240075] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:34.274 [2024-05-15 06:59:48.240314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.240952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.240984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.240990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.240999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.241017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241047] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.241059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.241084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.241120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the
state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.241159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.241182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.274 [2024-05-15 06:59:48.241207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.274 [2024-05-15 06:59:48.241222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.274 [2024-05-15 06:59:48.241224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a0b80 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241353] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.275 [2024-05-15 06:59:48.241798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.275 [2024-05-15 06:59:48.241810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15
06:59:48.241820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a0210 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241924] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10a0210 was disconnected and freed. reset controller. 00:22:34.275 [2024-05-15 06:59:48.241938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.241994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.275 [2024-05-15 06:59:48.242119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242280]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.242543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d900 is same with the 
state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.243449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:34.276 [2024-05-15 06:59:48.243483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1120490 (9): Bad file descriptor 00:22:34.276 [2024-05-15 06:59:48.244109] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:34.276 [2024-05-15 06:59:48.244558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:34.276 [2024-05-15 06:59:48.244582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.276 [2024-05-15 06:59:48.244597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9a4f0 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.244641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:34.276 [2024-05-15 06:59:48.244661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.276 [2024-05-15 06:59:48.244676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5f210 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.244774] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:34.276 [2024-05-15 06:59:48.244878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9a4f0 (9): Bad file descriptor 00:22:34.276 [2024-05-15 06:59:48.244922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:34.276 [2024-05-15 06:59:48.244950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.276 [2024-05-15 06:59:48.244965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125a60 is same with the state(5) to be set 00:22:34.276 [2024-05-15 06:59:48.244988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f210 (9): Bad file descriptor 00:22:34.276 [2024-05-15 06:59:48.245008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125a60 (9): Bad file descriptor 00:22:34.276 [2024-05-15 06:59:48.245117] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:34.276 [2024-05-15 06:59:48.245153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:34.276 [2024-05-15 06:59:48.245171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:34.276 [2024-05-15 06:59:48.245187] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
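The flood collapsed above is SPDK's TCP transport refusing a no-op state transition: nvmf_tcp_qpair_set_recv_state logs an error whenever it is asked to put a qpair's PDU-receive state machine into the state it is already in, and a qpair stuck in its error state hits that guard on every poll. A minimal sketch of that guard pattern, assuming hypothetical enum names and assuming state(5) is the error state; this is illustrative, not SPDK's actual code:

#include <stdio.h>

/* Hypothetical receive-state enum; only the guard pattern matters here. */
enum pdu_recv_state {
    RECV_STATE_AWAIT_PDU_READY = 0,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR = 5,    /* the "state(5)" seen in the log (assumed) */
};

struct tqpair {
    enum pdu_recv_state recv_state;
};

static void set_recv_state(struct tqpair *tqpair, enum pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Reproduces the shape of the flooding log line. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR);  /* no-op transition -> error line */
    return 0;
}

One message per poll iteration of a wedged qpair would be consistent with the tight, milliseconds-apart timestamps in the collapsed burst.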
00:22:34.276 [2024-05-15 06:59:48.245209] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:34.276 [2024-05-15 06:59:48.245229] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:34.276 [2024-05-15 06:59:48.245242] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:34.276 [2024-05-15 06:59:48.245306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.276 [2024-05-15 06:59:48.245329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.276 [2024-05-15 06:59:48.245357] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:34.276 [2024-05-15 06:59:48.245374] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:34.276 [2024-05-15 06:59:48.245387] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:34.276 [2024-05-15 06:59:48.245441] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.276 [2024-05-15 06:59:48.245463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1120490 (9): Bad file descriptor
00:22:34.276 [2024-05-15 06:59:48.245514] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:34.276 [2024-05-15 06:59:48.245531] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:34.276 [2024-05-15 06:59:48.245544] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:34.276 [2024-05-15 06:59:48.245596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.276 [2024-05-15 06:59:48.246528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:34.276 [2024-05-15 06:59:48.246781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.276 [2024-05-15 06:59:48.246997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.276 [2024-05-15 06:59:48.247023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf5f210 with addr=10.0.0.2, port=4420
00:22:34.276 [2024-05-15 06:59:48.247039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5f210 is same with the state(5) to be set
00:22:34.276 [2024-05-15 06:59:48.247094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f210 (9): Bad file descriptor
00:22:34.276 [2024-05-15 06:59:48.247148] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:34.276 [2024-05-15 06:59:48.247165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:34.276 [2024-05-15 06:59:48.247178] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:34.276 [2024-05-15 06:59:48.247232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.276 [2024-05-15 06:59:48.247331] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:34.276 [2024-05-15 06:59:48.247567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.276 [2024-05-15 06:59:48.247747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.276 [2024-05-15 06:59:48.247772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9a4f0 with addr=10.0.0.2, port=4420
00:22:34.276 [2024-05-15 06:59:48.247787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9a4f0 is same with the state(5) to be set
00:22:34.276 [2024-05-15 06:59:48.247841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9a4f0 (9): Bad file descriptor
00:22:34.276 [2024-05-15 06:59:48.247896] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:22:34.276 [2024-05-15 06:59:48.247912] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:22:34.276 [2024-05-15 06:59:48.247926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:34.276 [2024-05-15 06:59:48.247987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.276 [2024-05-15 06:59:48.248840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11208c0 (9): Bad file descriptor
00:22:34.276 [2024-05-15 06:59:48.248875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec2a50 (9): Bad file descriptor
00:22:34.276 [2024-05-15 06:59:48.248904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58e60 (9): Bad file descriptor
00:22:34.276 [2024-05-15 06:59:48.248961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:34.276 [2024-05-15 06:59:48.248983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:34.276 [2024-05-15 06:59:48.248998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:34.277 [2024-05-15 06:59:48.249011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:34.277 [2024-05-15 06:59:48.249025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:34.277 [2024-05-15 06:59:48.249038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:34.277 [2024-05-15 06:59:48.249051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:34.277 [2024-05-15 06:59:48.249064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:34.277 [2024-05-15 06:59:48.249076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58a30 is same with the state(5) to be set
00:22:34.277 [2024-05-15 06:59:48.249120] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.277 [2024-05-15 06:59:48.249140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.277 [2024-05-15 06:59:48.249168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.277 [2024-05-15 06:59:48.249194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.277 [2024-05-15 06:59:48.249227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e3830 is same with the state(5) to be set 00:22:34.277 [2024-05-15 06:59:48.249368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.249978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.249993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.277 [2024-05-15 06:59:48.250017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.277 [2024-05-15 06:59:48.250031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d7260 is same with the state(5) to be set 00:22:34.277 [2024-05-15 06:59:48.251132] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:34.277 [2024-05-15 06:59:48.251420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.277 [2024-05-15 06:59:48.251635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.277 [2024-05-15 06:59:48.251659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf61f70 with addr=10.0.0.2, port=4420 00:22:34.277 [2024-05-15 06:59:48.251675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61f70 is same with the state(5) to be set 00:22:34.277 [2024-05-15 06:59:48.252009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61f70 (9): Bad file descriptor 00:22:34.277 [2024-05-15 06:59:48.252085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:34.277 [2024-05-15 06:59:48.252105] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:34.277 [2024-05-15 06:59:48.252118] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:34.277 [2024-05-15 06:59:48.252178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
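Every completion printed in the command dumps above carries the status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion, the expected fate of I/O still queued when its submission queue is torn down during a controller reset. A small stand-alone decoder for that (SCT/SC) notation, using the standard layout of the completion's 16-bit status word (phase in bit 0, SC in bits 8:1, SCT in bits 11:9); the status value below is hand-built for illustration:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Build a status word the way a completion for an aborted command
     * would carry it: SCT=0x0 (generic), SC=0x08 (aborted - SQ deletion). */
    uint16_t status = (uint16_t)((0x0 << 9) | (0x08 << 1));

    uint8_t sct = (status >> 9) & 0x7;   /* status code type */
    uint8_t sc = (status >> 1) & 0xff;   /* status code */

    /* Prints "(00/08) ABORTED - SQ DELETION", matching the log's notation. */
    printf("(%02x/%02x)%s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? " ABORTED - SQ DELETION" : "");
    return 0;
}

The varying sqid/cid/lba fields in the dump identify which outstanding READ/WRITE commands were caught by the queue deletion; the status itself is identical for all of them.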
00:22:34.277 [2024-05-15 06:59:48.253853] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:34.277 [2024-05-15 06:59:48.254325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.277 [2024-05-15 06:59:48.254505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.277 [2024-05-15 06:59:48.254532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1120490 with addr=10.0.0.2, port=4420
00:22:34.277 [2024-05-15 06:59:48.254548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120490 is same with the state(5) to be set
00:22:34.277 [2024-05-15 06:59:48.254599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1120490 (9): Bad file descriptor
00:22:34.277 [2024-05-15 06:59:48.254650] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:34.277 [2024-05-15 06:59:48.254667] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:34.277 [2024-05-15 06:59:48.254680] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:34.277 [2024-05-15 06:59:48.254730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.277 [2024-05-15 06:59:48.254959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:34.277 [2024-05-15 06:59:48.255204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.277 [2024-05-15 06:59:48.255391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.277 [2024-05-15 06:59:48.255416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1125a60 with addr=10.0.0.2, port=4420
00:22:34.277 [2024-05-15 06:59:48.255431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125a60 is same with the state(5) to be set
00:22:34.277 [2024-05-15 06:59:48.255482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125a60 (9): Bad file descriptor
00:22:34.277 [2024-05-15 06:59:48.255552] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:34.277 [2024-05-15 06:59:48.255574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:34.277 [2024-05-15 06:59:48.255587] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:34.278 [2024-05-15 06:59:48.255637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.278 [2024-05-15 06:59:48.256659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:34.278 [2024-05-15 06:59:48.256877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.278 [2024-05-15 06:59:48.257265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.278 [2024-05-15 06:59:48.257291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf5f210 with addr=10.0.0.2, port=4420
00:22:34.278 [2024-05-15 06:59:48.257307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5f210 is same with the state(5) to be set
00:22:34.278 [2024-05-15 06:59:48.257363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f210 (9): Bad file descriptor
00:22:34.278 [2024-05-15 06:59:48.257417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:34.278 [2024-05-15 06:59:48.257442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:34.278 [2024-05-15 06:59:48.257456] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:34.278 [2024-05-15 06:59:48.257516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:34.278 [2024-05-15 06:59:48.257572] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:34.278 [2024-05-15 06:59:48.257787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.278 [2024-05-15 06:59:48.257964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.278 [2024-05-15 06:59:48.257996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9a4f0 with addr=10.0.0.2, port=4420
00:22:34.278 [2024-05-15 06:59:48.258012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9a4f0 is same with the state(5) to be set
00:22:34.278 [2024-05-15 06:59:48.258062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9a4f0 (9): Bad file descriptor
00:22:34.278 [2024-05-15 06:59:48.258112] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:22:34.278 [2024-05-15 06:59:48.258129] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:22:34.278 [2024-05-15 06:59:48.258142] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:34.278 [2024-05-15 06:59:48.258192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
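The repeated "connect() failed, errno = 111" lines pin down why each reset attempt above ends in "Resetting controller failed.": errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting connections on 10.0.0.2:4420 while the target side is down, so spdk_nvme_ctrlr_reconnect_poll_async can never rebuild the admin queue. The errno is reproducible with a plain socket; the address and port below are taken from the log, and any reachable host with no listener on the port behaves the same way:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),    /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the target, this prints errno = 111. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Until a listener returns on port 4420, every disconnect/reconnect cycle in the log repeats the same refusal, which is why the same cnode1/cnode2/cnode6/cnode10 sequence recurs below.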
00:22:34.278 [2024-05-15 06:59:48.258893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58a30 (9): Bad file descriptor 00:22:34.278 [2024-05-15 06:59:48.258936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e3830 (9): Bad file descriptor 00:22:34.278 [2024-05-15 06:59:48.259083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.259970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.259986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.260003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.278 [2024-05-15 06:59:48.260019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.278 [2024-05-15 06:59:48.260032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.260965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.260979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.261000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.261014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.261030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.261043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.261057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d8820 is same with the state(5) to be set 00:22:34.279 [2024-05-15 06:59:48.262332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.262356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.262376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.262391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.262407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.262421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.262438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.262452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.279 [2024-05-15 06:59:48.262467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.279 [2024-05-15 06:59:48.262486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:34.279 [2024-05-15 06:59:48.262502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:34.279 [2024-05-15 06:59:48.262516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ/WRITE command + ABORTED - SQ DELETION completion pairs for qid:1 omitted ...]
00:22:34.281 [2024-05-15 06:59:48.264306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ec30 is same with the state(5) to be set
[... repeated READ/WRITE command + ABORTED - SQ DELETION completion pairs omitted ...]
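Note on reading the records above: "(00/08)" is the completion's (status code type / status code) pair. SCT 0x0 is the generic command set and SC 0x08 is "command aborted due to SQ deletion", which the target returns for I/O still queued when its submission queue is torn down during a reset. A minimal sketch of an SPDK completion callback that classifies this status using the public spdk/nvme.h definitions (the callback name and requeue policy are illustrative, not part of this test):

/*
 * Sketch only: classify the "ABORTED - SQ DELETION (00/08)" status seen in
 * this log inside a standard spdk_nvme_cmd_cb completion callback.
 */
#include "spdk/nvme.h"

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The I/O was aborted because its submission queue was
		 * deleted (here, during a controller reset); a caller could
		 * requeue it once a new qpair is connected. */
		return;
	}
	/* ... handle success or other error statuses ... */
}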
00:22:34.283 [2024-05-15 06:59:48.267479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a17f0 is same with the state(5) to be set
00:22:34.283 [2024-05-15 06:59:48.268686] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:34.283 [2024-05-15 06:59:48.268720] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:34.283 [2024-05-15 06:59:48.268739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:34.283 [2024-05-15 06:59:48.269248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.283 [2024-05-15 06:59:48.269448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.283 [2024-05-15 06:59:48.269474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec2a50 with addr=10.0.0.2, port=4420
00:22:34.283 [2024-05-15 06:59:48.269490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec2a50 is same with the state(5) to be set
00:22:34.283 [2024-05-15 06:59:48.269702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.283 [2024-05-15 06:59:48.269877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.283 [2024-05-15 06:59:48.269901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11208c0 with addr=10.0.0.2, port=4420
00:22:34.283 [2024-05-15 06:59:48.269915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11208c0 is same with the state(5) to be set
00:22:34.283 [2024-05-15 06:59:48.270097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.283 [2024-05-15 06:59:48.270271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:34.283 [2024-05-15 06:59:48.270294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58e60 with addr=10.0.0.2, port=4420
00:22:34.283 [2024-05-15 06:59:48.270309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58e60 is same with the state(5) to be set
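errno = 111 on Linux is ECONNREFUSED: the target's TCP listener at 10.0.0.2:4420 (the standard NVMe/TCP port) is momentarily down while the subsystems reset, so each reconnect attempt is refused. A standalone sketch (hypothetical demo code, not SPDK's posix.c) that reproduces the same errno against a closed port:

/* Sketch only: show why connect() reports errno = 111 here. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
	struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
	inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
		/* With no listener on the port this prints:
		 * connect() failed, errno = 111 (Connection refused) */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}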
00:22:34.283 [2024-05-15 06:59:48.271170] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:34.283 [2024-05-15 06:59:48.271197] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:34.283 [2024-05-15 06:59:48.271221] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:34.283 [2024-05-15 06:59:48.271236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:34.283 [2024-05-15 06:59:48.271252] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:34.283 [2024-05-15 06:59:48.271317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec2a50 (9): Bad file descriptor
00:22:34.283 [2024-05-15 06:59:48.271341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11208c0 (9): Bad file descriptor
00:22:34.283 [2024-05-15 06:59:48.271359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58e60 (9): Bad file descriptor
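The "resetting controller" notices mark the driver disconnecting and re-enabling each controller; the "Bad file descriptor" flush errors are the old, already-closed sockets being drained. A minimal sketch of the same recovery sequence through SPDK's public API (function name and error policy are illustrative assumptions, not this test's code):

/* Sketch only: reset a controller and rebuild the I/O qpair whose SQ
 * deletion aborted the in-flight commands logged above. */
#include "spdk/nvme.h"

static int
reset_and_reconnect(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
{
	spdk_nvme_ctrlr_free_io_qpair(*qpair);   /* old qpair: its SQ was deleted */

	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) { /* disconnect + re-enable */
		return -1;                       /* e.g. connect() refused (111) */
	}

	/* NULL/0 opts request the controller's default qpair options. */
	*qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	return *qpair != NULL ? 0 : -1;
}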
[... repeated READ/WRITE command + ABORTED - SQ DELETION completion pairs for qid:1 omitted ...]
00:22:34.283 [2024-05-15 06:59:48.272112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a2dd0 is same with the state(5) to be set
[... repeated READ/WRITE command + ABORTED - SQ DELETION completion pairs omitted ...]
00:22:34.285 [2024-05-15 06:59:48.274847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:34.285 [2024-05-15 06:59:48.274860] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.274876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.274890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.274906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.274924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.274949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.274964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.274991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.275015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.275032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.275046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.275062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.275076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.275091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.275105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.275120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:34.285 [2024-05-15 06:59:48.275135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.285 [2024-05-15 06:59:48.275149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fcf2a0 is same with the state(5) to be set 00:22:34.285 [2024-05-15 06:59:48.276338] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:34.285 task offset: 24320 on job bdev=Nvme1n1 fails 00:22:34.285 00:22:34.285 Latency(us) 00:22:34.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme1n1 ended 
in about 0.59 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme1n1 : 0.59 277.89 17.37 108.44 0.00 164339.08 31263.10 180199.73 00:22:34.285 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme2n1 ended in about 0.60 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme2n1 : 0.60 292.52 18.28 95.28 0.00 161485.14 24369.68 143693.75 00:22:34.285 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme3n1 ended in about 0.62 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme3n1 : 0.62 405.74 25.36 33.95 0.00 136641.49 26796.94 128159.29 00:22:34.285 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme4n1 ended in about 0.63 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme4n1 : 0.63 333.44 20.84 101.62 0.00 140719.56 10437.21 119615.34 00:22:34.285 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme5n1 ended in about 0.63 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme5n1 : 0.63 259.08 16.19 101.11 0.00 167982.52 91653.31 142140.30 00:22:34.285 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme6n1 ended in about 0.61 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme6n1 : 0.61 355.20 22.20 78.57 0.00 137078.60 4757.43 129712.73 00:22:34.285 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme7n1 ended in about 0.64 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme7n1 : 0.64 326.96 20.43 100.60 0.00 138071.79 68739.98 112624.83 00:22:34.285 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme8n1 ended in about 0.64 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme8n1 : 0.64 391.80 24.49 32.78 0.00 130067.90 46409.20 119615.34 00:22:34.285 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme9n1 ended in about 0.64 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme9n1 : 0.64 254.73 15.92 99.41 0.00 162715.80 83886.08 149907.53 00:22:34.285 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:34.285 Job: Nvme10n1 ended in about 0.60 seconds with error 00:22:34.285 Verification LBA range: start 0x0 length 0x400 00:22:34.285 Nvme10n1 : 0.60 271.50 16.97 105.95 0.00 149560.05 81167.55 140586.86 00:22:34.285 =================================================================================================================== 00:22:34.285 Total : 3168.86 198.05 857.70 0.00 147825.63 4757.43 180199.73 00:22:34.285 [2024-05-15 06:59:48.305470] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:34.285 [2024-05-15 06:59:48.305556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:34.285 [2024-05-15 06:59:48.305982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.285 [2024-05-15 06:59:48.306170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.285 [2024-05-15 06:59:48.306197] 
nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf61f70 with addr=10.0.0.2, port=4420 00:22:34.285 [2024-05-15 06:59:48.306216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf61f70 is same with the state(5) to be set 00:22:34.285 [2024-05-15 06:59:48.306389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.285 [2024-05-15 06:59:48.306574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.285 [2024-05-15 06:59:48.306598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1120490 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.306613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1120490 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.306812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.306980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.307006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1125a60 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.307022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125a60 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.307187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.307349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.307374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf5f210 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.307389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5f210 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.307549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.307730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.307755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9a4f0 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.307770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9a4f0 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.307786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.307799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.307815] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:34.286 [2024-05-15 06:59:48.307841] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.307856] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.307869] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
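A quick plausibility check on the bdevperf summary table above (a throwaway sanity snippet, not part of the captured run): with 65536-byte IOs, the MiB/s column should equal IOPS * 65536 / 2^20, i.e. IOPS / 16. Taking the Nvme1n1 row:

    # verify the Nvme1n1 row: 277.89 IOPS at 64 KiB per IO
    echo 'scale=2; 277.89 * 65536 / 1048576' | bc    # prints 17.36; the table's 17.37 is the rounded value

The Total row obeys the same relation: 3168.86 / 16 = 198.05 MiB/s.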
00:22:34.286 [2024-05-15 06:59:48.307886] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.307900] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.307913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:34.286 [2024-05-15 06:59:48.308090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.308115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.308128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.308336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.308505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.308530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58a30 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.308545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58a30 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.308731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.308922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.308953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10e3830 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.308968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e3830 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.308994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf61f70 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.309016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1120490 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.309034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125a60 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.309051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f210 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.309067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9a4f0 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.309131] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:34.286 [2024-05-15 06:59:48.309154] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:34.286 [2024-05-15 06:59:48.309172] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:34.286 [2024-05-15 06:59:48.309196] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:34.286 [2024-05-15 06:59:48.309214] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:34.286 [2024-05-15 06:59:48.309823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58a30 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.309852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e3830 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.309869] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.309882] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.309895] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:34.286 [2024-05-15 06:59:48.309912] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.309927] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.309948] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:34.286 [2024-05-15 06:59:48.309965] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.309979] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.309991] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:34.286 [2024-05-15 06:59:48.310006] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.310020] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.310032] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:34.286 [2024-05-15 06:59:48.310048] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.310062] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.310074] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:34.286 [2024-05-15 06:59:48.310144] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:34.286 [2024-05-15 06:59:48.310168] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:34.286 [2024-05-15 06:59:48.310184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:34.286 [2024-05-15 06:59:48.310200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.310212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.310224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.310235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:34.286 [2024-05-15 06:59:48.310269] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.310285] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.310297] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:34.286 [2024-05-15 06:59:48.310313] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.310332] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.310345] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:34.286 [2024-05-15 06:59:48.310388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.310416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.310431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.286 [2024-05-15 06:59:48.310591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.310759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.310783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf58e60 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.310797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf58e60 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.310961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.311133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.311157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11208c0 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.311172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11208c0 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.311334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.311504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.286 [2024-05-15 06:59:48.311528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xec2a50 with addr=10.0.0.2, port=4420 00:22:34.286 [2024-05-15 06:59:48.311543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec2a50 is same with the state(5) to be set 00:22:34.286 [2024-05-15 06:59:48.311585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf58e60 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.311609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11208c0 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.311627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec2a50 (9): Bad file descriptor 00:22:34.286 [2024-05-15 06:59:48.311666] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.311684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.311698] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:34.286 [2024-05-15 06:59:48.311715] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:34.286 [2024-05-15 06:59:48.311729] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:34.286 [2024-05-15 06:59:48.311740] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:34.287 [2024-05-15 06:59:48.311755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:34.287 [2024-05-15 06:59:48.311769] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:34.287 [2024-05-15 06:59:48.311781] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:34.287 [2024-05-15 06:59:48.311818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.287 [2024-05-15 06:59:48.311836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.287 [2024-05-15 06:59:48.311852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:34.853 06:59:48 -- target/shutdown.sh@135 -- # nvmfpid= 00:22:34.853 06:59:48 -- target/shutdown.sh@138 -- # sleep 1 00:22:35.786 06:59:49 -- target/shutdown.sh@141 -- # kill -9 570273 00:22:35.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (570273) - No such process 00:22:35.786 06:59:49 -- target/shutdown.sh@141 -- # true 00:22:35.786 06:59:49 -- target/shutdown.sh@143 -- # stoptarget 00:22:35.786 06:59:49 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:35.786 06:59:49 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:35.786 06:59:49 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.786 06:59:49 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:35.786 06:59:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:35.786 06:59:49 -- nvmf/common.sh@116 -- # sync 00:22:35.786 06:59:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:35.786 06:59:49 -- nvmf/common.sh@119 -- # set +e 00:22:35.786 06:59:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:35.786 06:59:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:35.786 rmmod nvme_tcp 00:22:35.786 rmmod nvme_fabrics 00:22:35.786 rmmod nvme_keyring 00:22:35.786 06:59:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:35.787 06:59:49 -- nvmf/common.sh@123 -- # set -e 00:22:35.787 06:59:49 -- nvmf/common.sh@124 -- # return 0 00:22:35.787 06:59:49 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:35.787 06:59:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:35.787 06:59:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:35.787 06:59:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:35.787 06:59:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.787 
06:59:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:35.787 06:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.787 06:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.787 06:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.691 06:59:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:37.691 00:22:37.691 real 0m7.598s 00:22:37.691 user 0m18.149s 00:22:37.691 sys 0m1.484s 00:22:37.691 06:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.691 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:37.691 ************************************ 00:22:37.691 END TEST nvmf_shutdown_tc3 00:22:37.691 ************************************ 00:22:37.950 06:59:51 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:22:37.950 00:22:37.950 real 0m29.103s 00:22:37.950 user 1m22.620s 00:22:37.950 sys 0m6.619s 00:22:37.950 06:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.950 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:37.950 ************************************ 00:22:37.950 END TEST nvmf_shutdown 00:22:37.950 ************************************ 00:22:37.950 06:59:51 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:37.950 06:59:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:37.950 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:37.950 06:59:51 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:37.950 06:59:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:37.950 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:37.950 06:59:51 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:22:37.950 06:59:51 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:37.950 06:59:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:37.950 06:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.950 06:59:51 -- common/autotest_common.sh@10 -- # set +x 00:22:37.950 ************************************ 00:22:37.950 START TEST nvmf_multicontroller 00:22:37.950 ************************************ 00:22:37.950 06:59:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:37.950 * Looking for test storage... 
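The nvmf_shutdown teardown traced above reduces to a handful of commands. A minimal sketch, assuming the interface and namespace names used on this rig (cvl_0_1, cvl_0_0_ns_spdk) and assuming remove_spdk_ns amounts to deleting the test namespace:

    sync                                          # flush page cache before unloading drivers
    modprobe -v -r nvme-tcp                       # the harness retries this in a {1..20} loop
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1                      # drop the 10.0.0.x test address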
00:22:37.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.950 06:59:52 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.950 06:59:52 -- nvmf/common.sh@7 -- # uname -s 00:22:37.950 06:59:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.950 06:59:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.950 06:59:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.950 06:59:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.950 06:59:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.950 06:59:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.950 06:59:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.950 06:59:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.950 06:59:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.950 06:59:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.950 06:59:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.950 06:59:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.950 06:59:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.950 06:59:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.950 06:59:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.950 06:59:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.950 06:59:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.950 06:59:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.950 06:59:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.951 06:59:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.951 06:59:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.951 06:59:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.951 06:59:52 -- paths/export.sh@5 -- # export PATH 00:22:37.951 06:59:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.951 06:59:52 -- nvmf/common.sh@46 -- # : 0 00:22:37.951 06:59:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.951 06:59:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.951 06:59:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.951 06:59:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.951 06:59:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.951 06:59:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.951 06:59:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.951 06:59:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.951 06:59:52 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.951 06:59:52 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.951 06:59:52 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:37.951 06:59:52 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:37.951 06:59:52 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.951 06:59:52 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:37.951 06:59:52 -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:37.951 06:59:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.951 06:59:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.951 06:59:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.951 06:59:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.951 06:59:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.951 06:59:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.951 06:59:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.951 06:59:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.951 06:59:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:37.951 06:59:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:37.951 06:59:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:37.951 06:59:52 -- common/autotest_common.sh@10 -- # set +x 00:22:40.513 06:59:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:40.513 06:59:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:40.513 06:59:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:40.513 06:59:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:40.513 
06:59:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:40.513 06:59:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:40.513 06:59:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:40.513 06:59:54 -- nvmf/common.sh@294 -- # net_devs=() 00:22:40.513 06:59:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:40.513 06:59:54 -- nvmf/common.sh@295 -- # e810=() 00:22:40.513 06:59:54 -- nvmf/common.sh@295 -- # local -ga e810 00:22:40.513 06:59:54 -- nvmf/common.sh@296 -- # x722=() 00:22:40.513 06:59:54 -- nvmf/common.sh@296 -- # local -ga x722 00:22:40.513 06:59:54 -- nvmf/common.sh@297 -- # mlx=() 00:22:40.513 06:59:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:40.513 06:59:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.513 06:59:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:40.513 06:59:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:40.513 06:59:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:40.513 06:59:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:40.513 06:59:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:40.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:40.513 06:59:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:40.513 06:59:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:40.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:40.513 06:59:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:40.513 06:59:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
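The scan above and below matches NICs purely by PCI vendor:device ID; 0x8086:0x159b is the E810 device whose 'ice' driver binding is checked here. A hand-run equivalent of the lookup (bus addresses taken from this log; they differ per node):

    lspci -d 8086:159b                          # lists the E810 functions, 0a:00.0 and 0a:00.1 here
    ls /sys/bus/pci/devices/0000:0a:00.0/net    # the netdev bound to the first port (cvl_0_0 below)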
00:22:40.513 06:59:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.513 06:59:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:40.513 06:59:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.513 06:59:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:40.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:40.513 06:59:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.513 06:59:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:40.513 06:59:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.513 06:59:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:40.513 06:59:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.513 06:59:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:40.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:40.513 06:59:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.513 06:59:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:40.513 06:59:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:40.513 06:59:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:40.513 06:59:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.513 06:59:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.513 06:59:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.513 06:59:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:40.513 06:59:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.513 06:59:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.513 06:59:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:40.513 06:59:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.513 06:59:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.513 06:59:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:40.513 06:59:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:40.513 06:59:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.513 06:59:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.513 06:59:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.513 06:59:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.513 06:59:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:40.513 06:59:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.513 06:59:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.513 06:59:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.513 06:59:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:40.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:22:40.513 00:22:40.513 --- 10.0.0.2 ping statistics --- 00:22:40.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.513 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:22:40.513 06:59:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:22:40.513 00:22:40.513 --- 10.0.0.1 ping statistics --- 00:22:40.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.513 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:22:40.513 06:59:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.513 06:59:54 -- nvmf/common.sh@410 -- # return 0 00:22:40.513 06:59:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:40.513 06:59:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.513 06:59:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:40.513 06:59:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.513 06:59:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:40.513 06:59:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:40.513 06:59:54 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:40.513 06:59:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:40.513 06:59:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:40.513 06:59:54 -- common/autotest_common.sh@10 -- # set +x 00:22:40.513 06:59:54 -- nvmf/common.sh@469 -- # nvmfpid=572986 00:22:40.513 06:59:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:40.513 06:59:54 -- nvmf/common.sh@470 -- # waitforlisten 572986 00:22:40.513 06:59:54 -- common/autotest_common.sh@819 -- # '[' -z 572986 ']' 00:22:40.513 06:59:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.513 06:59:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:40.513 06:59:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.513 06:59:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:40.513 06:59:54 -- common/autotest_common.sh@10 -- # set +x 00:22:40.513 [2024-05-15 06:59:54.701674] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:40.513 [2024-05-15 06:59:54.701769] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.513 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.772 [2024-05-15 06:59:54.782836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.772 [2024-05-15 06:59:54.898690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:40.772 [2024-05-15 06:59:54.898854] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.772 [2024-05-15 06:59:54.898874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
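Condensed from the nvmf_tcp_init trace above, these are the commands it issued: one E810 port moves into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) traffic crosses the physical link, and the two pings prove both directions work before the target app comes up:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target (0.311 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator (0.230 ms above)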
00:22:40.772 [2024-05-15 06:59:54.898889] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.772 [2024-05-15 06:59:54.898990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.772 [2024-05-15 06:59:54.899080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.772 [2024-05-15 06:59:54.899084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.705 06:59:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:41.705 06:59:55 -- common/autotest_common.sh@852 -- # return 0 00:22:41.705 06:59:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:41.705 06:59:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:41.705 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.705 06:59:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.705 06:59:55 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 [2024-05-15 06:59:55.688121] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 Malloc0 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 [2024-05-15 06:59:55.751815] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 [2024-05-15 06:59:55.759724] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 Malloc1 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:41.706 06:59:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:41.706 06:59:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:41.706 06:59:55 -- host/multicontroller.sh@44 -- # bdevperf_pid=573145 00:22:41.706 06:59:55 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:41.706 06:59:55 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.706 06:59:55 -- host/multicontroller.sh@47 -- # waitforlisten 573145 /var/tmp/bdevperf.sock 00:22:41.706 06:59:55 -- common/autotest_common.sh@819 -- # '[' -z 573145 ']' 00:22:41.706 06:59:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.706 06:59:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:41.706 06:59:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
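Spelling out the target setup for this test: rpc_cmd in the harness forwards to scripts/rpc.py, so the calls above correspond to the sketch below (assuming the default /var/tmp/spdk.sock target socket). Two malloc-backed subsystems are created, each listening on both 4420 and 4421, which is what lets bdevperf attach one controller here and be offered a second path later:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2; do
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$((i - 1))
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i - 1))
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
    done

bdevperf is then launched with its own RPC socket (-z -r /var/tmp/bdevperf.sock), so the NVMe controller is attached to it at runtime via bdev_nvme_attach_controller, as the trace below shows.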
00:22:41.706 06:59:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:41.706 06:59:55 -- common/autotest_common.sh@10 -- # set +x 00:22:42.639 06:59:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:42.639 06:59:56 -- common/autotest_common.sh@852 -- # return 0 00:22:42.639 06:59:56 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:42.639 06:59:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.639 06:59:56 -- common/autotest_common.sh@10 -- # set +x 00:22:42.897 NVMe0n1 00:22:42.897 06:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.897 06:59:57 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:42.897 06:59:57 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:42.897 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.897 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:42.897 06:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:42.897 1 00:22:42.897 06:59:57 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:42.897 06:59:57 -- common/autotest_common.sh@640 -- # local es=0 00:22:42.897 06:59:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:42.897 06:59:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:42.897 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.897 06:59:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:42.897 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.897 06:59:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:42.897 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.897 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:42.897 request: 00:22:42.897 { 00:22:42.897 "name": "NVMe0", 00:22:42.897 "trtype": "tcp", 00:22:42.897 "traddr": "10.0.0.2", 00:22:42.897 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:42.897 "hostaddr": "10.0.0.2", 00:22:42.897 "hostsvcid": "60000", 00:22:42.897 "adrfam": "ipv4", 00:22:42.897 "trsvcid": "4420", 00:22:42.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.897 "method": "bdev_nvme_attach_controller", 00:22:42.897 "req_id": 1 00:22:42.897 } 00:22:42.897 Got JSON-RPC error response 00:22:42.897 response: 00:22:42.897 { 00:22:42.897 "code": -114, 00:22:42.897 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:42.897 } 00:22:42.897 06:59:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:42.897 06:59:57 -- common/autotest_common.sh@643 -- # es=1 00:22:42.897 06:59:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:42.897 06:59:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:42.897 06:59:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:42.897 06:59:57 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:42.897 06:59:57 -- common/autotest_common.sh@640 -- # local es=0 00:22:42.898 06:59:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:42.898 06:59:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.898 06:59:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:42.898 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.898 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:42.898 request: 00:22:42.898 { 00:22:42.898 "name": "NVMe0", 00:22:42.898 "trtype": "tcp", 00:22:42.898 "traddr": "10.0.0.2", 00:22:42.898 "hostaddr": "10.0.0.2", 00:22:42.898 "hostsvcid": "60000", 00:22:42.898 "adrfam": "ipv4", 00:22:42.898 "trsvcid": "4420", 00:22:42.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:42.898 "method": "bdev_nvme_attach_controller", 00:22:42.898 "req_id": 1 00:22:42.898 } 00:22:42.898 Got JSON-RPC error response 00:22:42.898 response: 00:22:42.898 { 00:22:42.898 "code": -114, 00:22:42.898 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:42.898 } 00:22:42.898 06:59:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:42.898 06:59:57 -- common/autotest_common.sh@643 -- # es=1 00:22:42.898 06:59:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:42.898 06:59:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:42.898 06:59:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:42.898 06:59:57 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:42.898 06:59:57 -- common/autotest_common.sh@640 -- # local es=0 00:22:42.898 06:59:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:42.898 06:59:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.898 06:59:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:42.898 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.898 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:42.898 request: 00:22:42.898 { 00:22:42.898 "name": "NVMe0", 00:22:42.898 "trtype": "tcp", 00:22:42.898 "traddr": "10.0.0.2", 00:22:42.898 "hostaddr": 
"10.0.0.2", 00:22:42.898 "hostsvcid": "60000", 00:22:42.898 "adrfam": "ipv4", 00:22:42.898 "trsvcid": "4420", 00:22:42.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.898 "multipath": "disable", 00:22:42.898 "method": "bdev_nvme_attach_controller", 00:22:42.898 "req_id": 1 00:22:42.898 } 00:22:42.898 Got JSON-RPC error response 00:22:42.898 response: 00:22:42.898 { 00:22:42.898 "code": -114, 00:22:42.898 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:42.898 } 00:22:42.898 06:59:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:42.898 06:59:57 -- common/autotest_common.sh@643 -- # es=1 00:22:42.898 06:59:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:42.898 06:59:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:42.898 06:59:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:42.898 06:59:57 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:42.898 06:59:57 -- common/autotest_common.sh@640 -- # local es=0 00:22:42.898 06:59:57 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:42.898 06:59:57 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:42.898 06:59:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:42.898 06:59:57 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:42.898 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.898 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:42.898 request: 00:22:42.898 { 00:22:42.898 "name": "NVMe0", 00:22:42.898 "trtype": "tcp", 00:22:42.898 "traddr": "10.0.0.2", 00:22:42.898 "hostaddr": "10.0.0.2", 00:22:42.898 "hostsvcid": "60000", 00:22:42.898 "adrfam": "ipv4", 00:22:42.898 "trsvcid": "4420", 00:22:42.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.898 "multipath": "failover", 00:22:42.898 "method": "bdev_nvme_attach_controller", 00:22:42.898 "req_id": 1 00:22:42.898 } 00:22:42.898 Got JSON-RPC error response 00:22:42.898 response: 00:22:42.898 { 00:22:42.898 "code": -114, 00:22:42.898 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:42.898 } 00:22:42.898 06:59:57 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:42.898 06:59:57 -- common/autotest_common.sh@643 -- # es=1 00:22:42.898 06:59:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:42.898 06:59:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:42.898 06:59:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:42.898 06:59:57 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:42.898 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:42.898 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:43.156 00:22:43.156 06:59:57 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:22:43.156 06:59:57 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.156 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.156 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:43.156 06:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.156 06:59:57 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:43.156 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.156 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:43.414 00:22:43.414 06:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.414 06:59:57 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:43.414 06:59:57 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:43.414 06:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.414 06:59:57 -- common/autotest_common.sh@10 -- # set +x 00:22:43.414 06:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.414 06:59:57 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:43.414 06:59:57 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:44.789 0 00:22:44.789 06:59:58 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:44.789 06:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.789 06:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:44.789 06:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.789 06:59:58 -- host/multicontroller.sh@100 -- # killprocess 573145 00:22:44.789 06:59:58 -- common/autotest_common.sh@926 -- # '[' -z 573145 ']' 00:22:44.789 06:59:58 -- common/autotest_common.sh@930 -- # kill -0 573145 00:22:44.789 06:59:58 -- common/autotest_common.sh@931 -- # uname 00:22:44.789 06:59:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.789 06:59:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 573145 00:22:44.789 06:59:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:44.789 06:59:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:44.789 06:59:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 573145' 00:22:44.789 killing process with pid 573145 00:22:44.789 06:59:58 -- common/autotest_common.sh@945 -- # kill 573145 00:22:44.789 06:59:58 -- common/autotest_common.sh@950 -- # wait 573145 00:22:44.789 06:59:58 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.789 06:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.789 06:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:44.789 06:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.789 06:59:58 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:44.789 06:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:44.789 06:59:58 -- common/autotest_common.sh@10 -- # set +x 00:22:44.789 06:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:44.789 06:59:58 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:44.789 
06:59:58 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
06:59:58 -- common/autotest_common.sh@1597 -- # read -r file
06:59:58 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
06:59:58 -- common/autotest_common.sh@1596 -- # sort -u
06:59:58 -- common/autotest_common.sh@1598 -- # cat
00:22:44.789 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:44.789 [2024-05-15 06:59:55.852234] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:22:44.789 [2024-05-15 06:59:55.852334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid573145 ]
00:22:44.789 EAL: No free 2048 kB hugepages reported on node 1
00:22:44.789 [2024-05-15 06:59:55.924795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:44.789 [2024-05-15 06:59:56.031513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:44.789 [2024-05-15 06:59:57.512911] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 48843b65-b497-45f7-8c70-f7f7bdd74521 already exists
00:22:44.789 [2024-05-15 06:59:57.512975] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:48843b65-b497-45f7-8c70-f7f7bdd74521 alias for bdev NVMe1n1
00:22:44.789 [2024-05-15 06:59:57.512994] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:22:44.789 Running I/O for 1 seconds...
00:22:44.789
00:22:44.789                                                                  Latency(us)
00:22:44.790 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:22:44.790 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:44.790 NVMe0n1            :       1.01   19259.66      75.23       0.00     0.00     6626.93    4757.43   12621.75
00:22:44.790 ===================================================================================================================
00:22:44.790 Total              :               19259.66      75.23       0.00     0.00     6626.93    4757.43   12621.75
00:22:44.790 Received shutdown signal, test time was about 1.000000 seconds
00:22:44.790
00:22:44.790                                                                  Latency(us)
00:22:44.790 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:22:44.790 ===================================================================================================================
00:22:44.790 Total              :       0.00       0.00       0.00       0.00     0.00        0.00       0.00       0.00
00:22:44.790 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:22:44.790 06:59:58 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:44.790 06:59:58 -- common/autotest_common.sh@1597 -- # read -r file
00:22:44.790 06:59:58 -- host/multicontroller.sh@108 -- # nvmftestfini
00:22:44.790 06:59:58 -- nvmf/common.sh@476 -- # nvmfcleanup
00:22:44.790 06:59:58 -- nvmf/common.sh@116 -- # sync
00:22:44.791 06:59:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:44.791 06:59:58 -- nvmf/common.sh@119 -- # set +e
00:22:44.791 06:59:58 -- nvmf/common.sh@120 -- # for i in {1..20}
00:22:44.791 06:59:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:44.866 rmmod nvme_tcp
00:22:44.927 rmmod nvme_fabrics
00:22:45.048 rmmod nvme_keyring
00:22:45.048 06:59:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:45.048 06:59:59 -- nvmf/common.sh@123 -- # set -e
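[Note on the try.txt dump above: the columns after the job name read as runtime in seconds, IOPS, throughput in MiB/s, failed and timed-out I/Os per second, then average/min/max completion latency in microseconds. The MiB/s column is just IOPS times the 4096-byte I/O size: 19259.66 * 4096 B = 78,887,567 B/s, and 78,887,567 / 1,048,576 = 75.23 MiB/s, matching the logged value. The second, all-zero table is printed after the shutdown signal, once no further I/O is counted. The earlier bdev_name_add *ERROR* lines come from attaching NVMe1 to a namespace whose UUID already backs NVMe0n1; the suite still reports success, so that duplicate-alias failure is tolerated by this test.]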
00:22:45.048 06:59:59 -- nvmf/common.sh@124 -- # return 0 00:22:45.048 06:59:59 -- nvmf/common.sh@477 -- # '[' -n 572986 ']' 00:22:45.048 06:59:59 -- nvmf/common.sh@478 -- # killprocess 572986 00:22:45.048 06:59:59 -- common/autotest_common.sh@926 -- # '[' -z 572986 ']' 00:22:45.048 06:59:59 -- common/autotest_common.sh@930 -- # kill -0 572986 00:22:45.048 06:59:59 -- common/autotest_common.sh@931 -- # uname 00:22:45.048 06:59:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:45.048 06:59:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 572986 00:22:45.048 06:59:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:45.048 06:59:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:45.048 06:59:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 572986' 00:22:45.048 killing process with pid 572986 00:22:45.048 06:59:59 -- common/autotest_common.sh@945 -- # kill 572986 00:22:45.048 06:59:59 -- common/autotest_common.sh@950 -- # wait 572986 00:22:45.307 06:59:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:45.307 06:59:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:45.307 06:59:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:45.307 06:59:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.307 06:59:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:45.307 06:59:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.307 06:59:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.307 06:59:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.208 07:00:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:47.208 00:22:47.208 real 0m9.409s 00:22:47.208 user 0m17.148s 00:22:47.208 sys 0m2.823s 00:22:47.208 07:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.208 07:00:01 -- common/autotest_common.sh@10 -- # set +x 00:22:47.208 ************************************ 00:22:47.208 END TEST nvmf_multicontroller 00:22:47.208 ************************************ 00:22:47.208 07:00:01 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:47.208 07:00:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:47.208 07:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:47.208 07:00:01 -- common/autotest_common.sh@10 -- # set +x 00:22:47.208 ************************************ 00:22:47.208 START TEST nvmf_aer 00:22:47.208 ************************************ 00:22:47.208 07:00:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:47.465 * Looking for test storage... 
00:22:47.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:47.465 07:00:01 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.465 07:00:01 -- nvmf/common.sh@7 -- # uname -s 00:22:47.465 07:00:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.465 07:00:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.465 07:00:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.465 07:00:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.465 07:00:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.465 07:00:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.465 07:00:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.465 07:00:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.465 07:00:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.465 07:00:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.465 07:00:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.465 07:00:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.465 07:00:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.465 07:00:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.465 07:00:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.465 07:00:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:47.465 07:00:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.465 07:00:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.465 07:00:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.466 07:00:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.466 07:00:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.466 07:00:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.466 07:00:01 -- paths/export.sh@5 -- # export PATH 00:22:47.466 07:00:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.466 07:00:01 -- nvmf/common.sh@46 -- # : 0 00:22:47.466 07:00:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:47.466 07:00:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:47.466 07:00:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:47.466 07:00:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.466 07:00:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.466 07:00:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:47.466 07:00:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:47.466 07:00:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:47.466 07:00:01 -- host/aer.sh@11 -- # nvmftestinit 00:22:47.466 07:00:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:47.466 07:00:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.466 07:00:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:47.466 07:00:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:47.466 07:00:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:47.466 07:00:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.466 07:00:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.466 07:00:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.466 07:00:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:47.466 07:00:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:47.466 07:00:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:47.466 07:00:01 -- common/autotest_common.sh@10 -- # set +x 00:22:49.989 07:00:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:49.989 07:00:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:49.989 07:00:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:49.989 07:00:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:49.989 07:00:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:49.989 07:00:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:49.989 07:00:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:49.989 07:00:04 -- nvmf/common.sh@294 -- # net_devs=() 00:22:49.989 07:00:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:49.989 07:00:04 -- nvmf/common.sh@295 -- # e810=() 00:22:49.989 07:00:04 -- nvmf/common.sh@295 -- # local -ga e810 00:22:49.989 07:00:04 -- nvmf/common.sh@296 -- # x722=() 00:22:49.989 
07:00:04 -- nvmf/common.sh@296 -- # local -ga x722 00:22:49.989 07:00:04 -- nvmf/common.sh@297 -- # mlx=() 00:22:49.989 07:00:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:49.990 07:00:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.990 07:00:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:49.990 07:00:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:49.990 07:00:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:49.990 07:00:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.990 07:00:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:49.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:49.990 07:00:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:49.990 07:00:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:49.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:49.990 07:00:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:49.990 07:00:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.990 07:00:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.990 07:00:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.990 07:00:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.990 07:00:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:49.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:49.990 07:00:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.990 07:00:04 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:49.990 07:00:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.990 07:00:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:49.990 07:00:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.990 07:00:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:49.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:49.990 07:00:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.990 07:00:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:49.990 07:00:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:49.990 07:00:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:49.990 07:00:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.990 07:00:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.990 07:00:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.990 07:00:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:49.990 07:00:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.990 07:00:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.990 07:00:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:49.990 07:00:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.990 07:00:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.990 07:00:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:49.990 07:00:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:49.990 07:00:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.990 07:00:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.990 07:00:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.990 07:00:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.990 07:00:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:49.990 07:00:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.990 07:00:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.990 07:00:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.990 07:00:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:49.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:22:49.990 00:22:49.990 --- 10.0.0.2 ping statistics --- 00:22:49.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.990 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:22:49.990 07:00:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:22:49.990 00:22:49.990 --- 10.0.0.1 ping statistics --- 00:22:49.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.990 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:49.990 07:00:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.990 07:00:04 -- nvmf/common.sh@410 -- # return 0 00:22:49.990 07:00:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:49.990 07:00:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.990 07:00:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:49.990 07:00:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.990 07:00:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:49.990 07:00:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:49.990 07:00:04 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:49.990 07:00:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:49.990 07:00:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:49.990 07:00:04 -- common/autotest_common.sh@10 -- # set +x 00:22:49.990 07:00:04 -- nvmf/common.sh@469 -- # nvmfpid=576042 00:22:49.990 07:00:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:49.990 07:00:04 -- nvmf/common.sh@470 -- # waitforlisten 576042 00:22:49.990 07:00:04 -- common/autotest_common.sh@819 -- # '[' -z 576042 ']' 00:22:49.990 07:00:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.990 07:00:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:49.990 07:00:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.990 07:00:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:49.990 07:00:04 -- common/autotest_common.sh@10 -- # set +x 00:22:50.247 [2024-05-15 07:00:04.237761] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:22:50.247 [2024-05-15 07:00:04.237834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.247 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.247 [2024-05-15 07:00:04.312820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:50.247 [2024-05-15 07:00:04.418695] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:50.247 [2024-05-15 07:00:04.418826] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.247 [2024-05-15 07:00:04.418842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.247 [2024-05-15 07:00:04.418854] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
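[Note on the nvmf_tcp_init sequence above: it wires the two ports of the E810 NIC back-to-back through a network namespace, so the target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (cvl_0_1, 10.0.0.1, in the root namespace) exchange traffic over real PHY hardware on one machine. Condensed from the xtraced commands, with device names and addresses as logged:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check

nvmf_tgt is then launched under 'ip netns exec cvl_0_0_ns_spdk', which is why every target-side command in the log carries that prefix.]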
00:22:50.247 [2024-05-15 07:00:04.418903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.247 [2024-05-15 07:00:04.418960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.247 [2024-05-15 07:00:04.419025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:50.247 [2024-05-15 07:00:04.419029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.179 07:00:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:51.179 07:00:05 -- common/autotest_common.sh@852 -- # return 0 00:22:51.179 07:00:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:51.179 07:00:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:51.179 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 07:00:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.179 07:00:05 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:51.179 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.179 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 [2024-05-15 07:00:05.242607] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.179 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.179 07:00:05 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:51.179 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.179 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 Malloc0 00:22:51.179 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.179 07:00:05 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:51.179 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.179 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.179 07:00:05 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:51.179 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.179 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.179 07:00:05 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:51.179 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.179 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 [2024-05-15 07:00:05.294726] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.179 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.179 07:00:05 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:51.179 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.179 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 [2024-05-15 07:00:05.302482] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:51.179 [ 00:22:51.179 { 00:22:51.179 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:51.179 "subtype": "Discovery", 00:22:51.179 "listen_addresses": [], 00:22:51.179 "allow_any_host": true, 00:22:51.179 "hosts": [] 00:22:51.179 }, 00:22:51.179 { 00:22:51.179 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:51.179 "subtype": "NVMe", 00:22:51.179 "listen_addresses": [ 00:22:51.179 { 00:22:51.179 "transport": "TCP", 00:22:51.179 "trtype": "TCP", 00:22:51.179 "adrfam": "IPv4", 00:22:51.179 "traddr": "10.0.0.2", 00:22:51.179 "trsvcid": "4420" 00:22:51.179 } 00:22:51.179 ], 00:22:51.179 "allow_any_host": true, 00:22:51.179 "hosts": [], 00:22:51.179 "serial_number": "SPDK00000000000001", 00:22:51.179 "model_number": "SPDK bdev Controller", 00:22:51.179 "max_namespaces": 2, 00:22:51.179 "min_cntlid": 1, 00:22:51.179 "max_cntlid": 65519, 00:22:51.179 "namespaces": [ 00:22:51.179 { 00:22:51.179 "nsid": 1, 00:22:51.179 "bdev_name": "Malloc0", 00:22:51.179 "name": "Malloc0", 00:22:51.179 "nguid": "4BDF229E84884503809F330719A2A47C", 00:22:51.179 "uuid": "4bdf229e-8488-4503-809f-330719a2a47c" 00:22:51.179 } 00:22:51.179 ] 00:22:51.179 } 00:22:51.179 ] 00:22:51.179 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.179 07:00:05 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:51.179 07:00:05 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:51.179 07:00:05 -- host/aer.sh@33 -- # aerpid=576198 00:22:51.179 07:00:05 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:51.179 07:00:05 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:51.179 07:00:05 -- common/autotest_common.sh@1244 -- # local i=0 00:22:51.179 07:00:05 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:51.179 07:00:05 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:22:51.179 07:00:05 -- common/autotest_common.sh@1247 -- # i=1 00:22:51.179 07:00:05 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:51.179 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.437 07:00:05 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:51.437 07:00:05 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:22:51.437 07:00:05 -- common/autotest_common.sh@1247 -- # i=2 00:22:51.437 07:00:05 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:51.437 07:00:05 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:51.437 07:00:05 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:22:51.437 07:00:05 -- common/autotest_common.sh@1247 -- # i=3 00:22:51.437 07:00:05 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:51.437 07:00:05 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:51.437 07:00:05 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']'
00:22:51.437 07:00:05 -- common/autotest_common.sh@1255 -- # return 0
00:22:51.437 07:00:05 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:22:51.437 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:51.437 07:00:05 -- common/autotest_common.sh@10 -- # set +x
00:22:51.437 Malloc1
00:22:51.437 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:51.437 07:00:05 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:22:51.437 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:51.437 07:00:05 -- common/autotest_common.sh@10 -- # set +x
00:22:51.694 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:51.694 07:00:05 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:22:51.694 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:51.694 07:00:05 -- common/autotest_common.sh@10 -- # set +x
00:22:51.694 [
00:22:51.694   {
00:22:51.694     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:51.694     "subtype": "Discovery",
00:22:51.694     "listen_addresses": [],
00:22:51.694     "allow_any_host": true,
00:22:51.694     "hosts": []
00:22:51.694   },
00:22:51.694   {
00:22:51.694     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:51.694     "subtype": "NVMe",
00:22:51.694     "listen_addresses": [
00:22:51.694       {
00:22:51.694         "transport": "TCP",
00:22:51.694         "trtype": "TCP",
00:22:51.694         "adrfam": "IPv4",
00:22:51.694         "traddr": "10.0.0.2",
00:22:51.694         "trsvcid": "4420"
00:22:51.694       }
00:22:51.694     ],
00:22:51.694     "allow_any_host": true,
00:22:51.694     "hosts": [],
00:22:51.694     "serial_number": "SPDK00000000000001",
00:22:51.694     "model_number": "SPDK bdev Controller",
00:22:51.694     "max_namespaces": 2,
00:22:51.694     "min_cntlid": 1,
00:22:51.694     "max_cntlid": 65519,
00:22:51.694     "namespaces": [
00:22:51.694       {
00:22:51.694         "nsid": 1,
00:22:51.694         "bdev_name": "Malloc0",
00:22:51.694         "name": "Malloc0",
00:22:51.694         "nguid": "4BDF229E84884503809F330719A2A47C",
00:22:51.694         "uuid": "4bdf229e-8488-4503-809f-330719a2a47c"
00:22:51.694       },
00:22:51.694       {
00:22:51.694         "nsid": 2,
00:22:51.694         "bdev_name": "Malloc1",
00:22:51.694         "name": "Malloc1",
00:22:51.694         "nguid": "FDF2966AAE154F4D9183B162652454FA",
00:22:51.694         "uuid": "fdf2966a-ae15-4f4d-9183-b162652454fa"
00:22:51.694       }
00:22:51.694     ]
00:22:51.694   }
00:22:51.694 ]
00:22:51.694 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:51.694 07:00:05 -- host/aer.sh@43 -- # wait 576198
00:22:51.694 Asynchronous Event Request test
00:22:51.694 Attaching to 10.0.0.2
00:22:51.694 Attached to 10.0.0.2
00:22:51.694 Registering asynchronous event callbacks...
00:22:51.694 Starting namespace attribute notice tests for all controllers...
00:22:51.694 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:22:51.694 aer_cb - Changed Namespace
00:22:51.694 Cleaning up...
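[Note on the AER exchange above: it is driven by SPDK's example initiator, test/nvme/aer/aer. The harness starts it against cnode1 expecting up to two namespaces (-n 2) and telling it to create a touch file once its callbacks are registered; only then does the test hot-add the second namespace, which produces the Changed Namespace notice (log page 4, aen_event_type 0x02) seen in aer_cb. A sketch of the same sequence outside the harness, with flags as invoked in the log but the backgrounding and polling simplified:

    ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    until [ -e /tmp/aer_touch_file ]; do sleep 0.1; done    # waitforfile, simplified
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the AEN
    wait $!    # aer exits 0 after reporting the namespace-attribute notice]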
00:22:51.694 07:00:05 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:51.694 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.694 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.694 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.694 07:00:05 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:51.694 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.694 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.694 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.694 07:00:05 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.694 07:00:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.694 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:22:51.694 07:00:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.694 07:00:05 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:51.694 07:00:05 -- host/aer.sh@51 -- # nvmftestfini 00:22:51.694 07:00:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:51.694 07:00:05 -- nvmf/common.sh@116 -- # sync 00:22:51.694 07:00:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:51.694 07:00:05 -- nvmf/common.sh@119 -- # set +e 00:22:51.694 07:00:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:51.694 07:00:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:51.694 rmmod nvme_tcp 00:22:51.694 rmmod nvme_fabrics 00:22:51.694 rmmod nvme_keyring 00:22:51.694 07:00:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:51.694 07:00:05 -- nvmf/common.sh@123 -- # set -e 00:22:51.694 07:00:05 -- nvmf/common.sh@124 -- # return 0 00:22:51.694 07:00:05 -- nvmf/common.sh@477 -- # '[' -n 576042 ']' 00:22:51.694 07:00:05 -- nvmf/common.sh@478 -- # killprocess 576042 00:22:51.694 07:00:05 -- common/autotest_common.sh@926 -- # '[' -z 576042 ']' 00:22:51.694 07:00:05 -- common/autotest_common.sh@930 -- # kill -0 576042 00:22:51.694 07:00:05 -- common/autotest_common.sh@931 -- # uname 00:22:51.694 07:00:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:51.694 07:00:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 576042 00:22:51.694 07:00:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:51.694 07:00:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:51.694 07:00:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 576042' 00:22:51.694 killing process with pid 576042 00:22:51.694 07:00:05 -- common/autotest_common.sh@945 -- # kill 576042 00:22:51.694 [2024-05-15 07:00:05.866424] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:51.694 07:00:05 -- common/autotest_common.sh@950 -- # wait 576042 00:22:51.952 07:00:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:51.952 07:00:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:51.952 07:00:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:51.952 07:00:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.952 07:00:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:51.952 07:00:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.952 07:00:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.952 07:00:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.479 07:00:08 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:54.479 00:22:54.479 real 0m6.726s 00:22:54.479 user 0m7.748s 00:22:54.479 sys 0m2.308s 00:22:54.479 07:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.479 07:00:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.479 ************************************ 00:22:54.479 END TEST nvmf_aer 00:22:54.479 ************************************ 00:22:54.479 07:00:08 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:54.479 07:00:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:54.479 07:00:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:54.479 07:00:08 -- common/autotest_common.sh@10 -- # set +x 00:22:54.479 ************************************ 00:22:54.479 START TEST nvmf_async_init 00:22:54.479 ************************************ 00:22:54.479 07:00:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:54.479 * Looking for test storage... 00:22:54.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.479 07:00:08 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.479 07:00:08 -- nvmf/common.sh@7 -- # uname -s 00:22:54.479 07:00:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.479 07:00:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.479 07:00:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.479 07:00:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.479 07:00:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.479 07:00:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.479 07:00:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.479 07:00:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.479 07:00:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.479 07:00:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.479 07:00:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.479 07:00:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.479 07:00:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.479 07:00:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.479 07:00:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.479 07:00:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.479 07:00:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.479 07:00:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.479 07:00:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.479 07:00:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.479 07:00:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.479 07:00:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.479 07:00:08 -- paths/export.sh@5 -- # export PATH 00:22:54.479 07:00:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.479 07:00:08 -- nvmf/common.sh@46 -- # : 0 00:22:54.479 07:00:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:54.479 07:00:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:54.479 07:00:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:54.479 07:00:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.479 07:00:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.479 07:00:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:54.479 07:00:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:54.479 07:00:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:54.479 07:00:08 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:54.479 07:00:08 -- host/async_init.sh@14 -- # null_block_size=512 00:22:54.479 07:00:08 -- host/async_init.sh@15 -- # null_bdev=null0 00:22:54.479 07:00:08 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:54.479 07:00:08 -- host/async_init.sh@20 -- # uuidgen 00:22:54.479 07:00:08 -- host/async_init.sh@20 -- # tr -d - 00:22:54.479 07:00:08 -- host/async_init.sh@20 -- # nguid=0650169dbce545b68782e1f8d4377c26 00:22:54.479 07:00:08 -- host/async_init.sh@22 -- # nvmftestinit 00:22:54.479 07:00:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
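[Note on the async_init parameters above: the suite fixes its test objects before any target comes up: a 1024-block, 512-byte null bdev (null0) that will later be re-exported as nvme0, and a namespace NGUID made by stripping the dashes from a fresh UUID. The NGUID step as a one-liner (the hex value is this run's output; any uuidgen result works the same way):

    nguid=$(uuidgen | tr -d -)    # e.g. 0650169dbce545b68782e1f8d4377c26]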
00:22:54.479 07:00:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.479 07:00:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:54.479 07:00:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:54.479 07:00:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:54.479 07:00:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.479 07:00:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.479 07:00:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.479 07:00:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:54.479 07:00:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:54.479 07:00:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:54.479 07:00:08 -- common/autotest_common.sh@10 -- # set +x 00:22:57.030 07:00:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:57.030 07:00:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:57.030 07:00:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:57.030 07:00:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:57.030 07:00:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:57.030 07:00:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:57.030 07:00:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:57.030 07:00:10 -- nvmf/common.sh@294 -- # net_devs=() 00:22:57.030 07:00:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:57.030 07:00:10 -- nvmf/common.sh@295 -- # e810=() 00:22:57.030 07:00:10 -- nvmf/common.sh@295 -- # local -ga e810 00:22:57.030 07:00:10 -- nvmf/common.sh@296 -- # x722=() 00:22:57.030 07:00:10 -- nvmf/common.sh@296 -- # local -ga x722 00:22:57.030 07:00:10 -- nvmf/common.sh@297 -- # mlx=() 00:22:57.030 07:00:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:57.030 07:00:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.030 07:00:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:57.030 07:00:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:57.030 07:00:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:57.030 07:00:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:57.030 07:00:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:57.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:57.030 07:00:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:57.030 07:00:10 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:57.030 07:00:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:57.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:57.030 07:00:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:57.030 07:00:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:57.030 07:00:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.030 07:00:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:57.030 07:00:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.030 07:00:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:57.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:57.030 07:00:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.030 07:00:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:57.030 07:00:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.030 07:00:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:57.030 07:00:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.030 07:00:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:57.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:57.030 07:00:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.030 07:00:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:57.030 07:00:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:57.030 07:00:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:57.030 07:00:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.030 07:00:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.030 07:00:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.030 07:00:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:57.030 07:00:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.030 07:00:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.030 07:00:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:57.030 07:00:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.030 07:00:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.030 07:00:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:57.030 07:00:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:57.030 07:00:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.030 07:00:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
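A standalone sketch of the sysfs lookup the discovery loop above performs for each bound port; the PCI address and the resulting cvl_0_0 name are the ones from this run:

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob expands to .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"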
00:22:57.030 07:00:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.030 07:00:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.030 07:00:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:57.030 07:00:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.030 07:00:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.030 07:00:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.030 07:00:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:57.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:22:57.030 00:22:57.030 --- 10.0.0.2 ping statistics --- 00:22:57.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.030 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:22:57.030 07:00:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:22:57.030 00:22:57.030 --- 10.0.0.1 ping statistics --- 00:22:57.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.030 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:57.030 07:00:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.030 07:00:10 -- nvmf/common.sh@410 -- # return 0 00:22:57.030 07:00:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:57.030 07:00:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.030 07:00:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:57.030 07:00:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.030 07:00:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:57.030 07:00:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:57.030 07:00:10 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:57.031 07:00:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:57.031 07:00:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:57.031 07:00:10 -- common/autotest_common.sh@10 -- # set +x 00:22:57.031 07:00:10 -- nvmf/common.sh@469 -- # nvmfpid=578950 00:22:57.031 07:00:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:57.031 07:00:10 -- nvmf/common.sh@470 -- # waitforlisten 578950 00:22:57.031 07:00:10 -- common/autotest_common.sh@819 -- # '[' -z 578950 ']' 00:22:57.031 07:00:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.031 07:00:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:57.031 07:00:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.031 07:00:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.031 07:00:10 -- common/autotest_common.sh@10 -- # set +x 00:22:57.031 [2024-05-15 07:00:10.857446] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
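Condensed from the nvmf_tcp_init trace above: one port of the E810 is moved into a private network namespace to act as the target, its sibling stays in the root namespace as the initiator, both ends are addressed on 10.0.0.0/24, port 4420 is opened, and a one-packet ping in each direction proves the path before nvme-tcp is loaded. The same sequence with the xtrace noise stripped:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1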
00:22:57.031 [2024-05-15 07:00:10.857535] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.031 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.031 [2024-05-15 07:00:10.943148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.031 [2024-05-15 07:00:11.062060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:57.031 [2024-05-15 07:00:11.062196] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.031 [2024-05-15 07:00:11.062214] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.031 [2024-05-15 07:00:11.062243] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.031 [2024-05-15 07:00:11.062272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.596 07:00:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:57.596 07:00:11 -- common/autotest_common.sh@852 -- # return 0 00:22:57.596 07:00:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:57.596 07:00:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:57.596 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.596 07:00:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.596 07:00:11 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:57.596 07:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.596 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.596 [2024-05-15 07:00:11.826747] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.596 07:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.596 07:00:11 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:57.596 07:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.853 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.853 null0 00:22:57.853 07:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.853 07:00:11 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:57.853 07:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.853 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.853 07:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.853 07:00:11 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:57.853 07:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.853 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.853 07:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.853 07:00:11 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0650169dbce545b68782e1f8d4377c26 00:22:57.853 07:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.853 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:22:57.853 07:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.853 07:00:11 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:57.853 07:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.853 07:00:11 -- 
common/autotest_common.sh@10 -- # set +x 00:22:57.853 [2024-05-15 07:00:11.867010] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.853 07:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:57.853 07:00:11 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:57.853 07:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:57.853 07:00:11 -- common/autotest_common.sh@10 -- # set +x 00:22:58.110 nvme0n1 00:22:58.110 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.110 07:00:12 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:58.110 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.110 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.110 [ 00:22:58.110 { 00:22:58.110 "name": "nvme0n1", 00:22:58.110 "aliases": [ 00:22:58.110 "0650169d-bce5-45b6-8782-e1f8d4377c26" 00:22:58.110 ], 00:22:58.110 "product_name": "NVMe disk", 00:22:58.110 "block_size": 512, 00:22:58.110 "num_blocks": 2097152, 00:22:58.110 "uuid": "0650169d-bce5-45b6-8782-e1f8d4377c26", 00:22:58.110 "assigned_rate_limits": { 00:22:58.110 "rw_ios_per_sec": 0, 00:22:58.110 "rw_mbytes_per_sec": 0, 00:22:58.110 "r_mbytes_per_sec": 0, 00:22:58.110 "w_mbytes_per_sec": 0 00:22:58.110 }, 00:22:58.110 "claimed": false, 00:22:58.110 "zoned": false, 00:22:58.111 "supported_io_types": { 00:22:58.111 "read": true, 00:22:58.111 "write": true, 00:22:58.111 "unmap": false, 00:22:58.111 "write_zeroes": true, 00:22:58.111 "flush": true, 00:22:58.111 "reset": true, 00:22:58.111 "compare": true, 00:22:58.111 "compare_and_write": true, 00:22:58.111 "abort": true, 00:22:58.111 "nvme_admin": true, 00:22:58.111 "nvme_io": true 00:22:58.111 }, 00:22:58.111 "driver_specific": { 00:22:58.111 "nvme": [ 00:22:58.111 { 00:22:58.111 "trid": { 00:22:58.111 "trtype": "TCP", 00:22:58.111 "adrfam": "IPv4", 00:22:58.111 "traddr": "10.0.0.2", 00:22:58.111 "trsvcid": "4420", 00:22:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:58.111 }, 00:22:58.111 "ctrlr_data": { 00:22:58.111 "cntlid": 1, 00:22:58.111 "vendor_id": "0x8086", 00:22:58.111 "model_number": "SPDK bdev Controller", 00:22:58.111 "serial_number": "00000000000000000000", 00:22:58.111 "firmware_revision": "24.01.1", 00:22:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:58.111 "oacs": { 00:22:58.111 "security": 0, 00:22:58.111 "format": 0, 00:22:58.111 "firmware": 0, 00:22:58.111 "ns_manage": 0 00:22:58.111 }, 00:22:58.111 "multi_ctrlr": true, 00:22:58.111 "ana_reporting": false 00:22:58.111 }, 00:22:58.111 "vs": { 00:22:58.111 "nvme_version": "1.3" 00:22:58.111 }, 00:22:58.111 "ns_data": { 00:22:58.111 "id": 1, 00:22:58.111 "can_share": true 00:22:58.111 } 00:22:58.111 } 00:22:58.111 ], 00:22:58.111 "mp_policy": "active_passive" 00:22:58.111 } 00:22:58.111 } 00:22:58.111 ] 00:22:58.111 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.111 07:00:12 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:58.111 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 [2024-05-15 07:00:12.115645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:58.111 [2024-05-15 07:00:12.115745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18559d0 (9): Bad file 
descriptor 00:22:58.111 [2024-05-15 07:00:12.248095] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:58.111 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.111 07:00:12 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:58.111 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 [ 00:22:58.111 { 00:22:58.111 "name": "nvme0n1", 00:22:58.111 "aliases": [ 00:22:58.111 "0650169d-bce5-45b6-8782-e1f8d4377c26" 00:22:58.111 ], 00:22:58.111 "product_name": "NVMe disk", 00:22:58.111 "block_size": 512, 00:22:58.111 "num_blocks": 2097152, 00:22:58.111 "uuid": "0650169d-bce5-45b6-8782-e1f8d4377c26", 00:22:58.111 "assigned_rate_limits": { 00:22:58.111 "rw_ios_per_sec": 0, 00:22:58.111 "rw_mbytes_per_sec": 0, 00:22:58.111 "r_mbytes_per_sec": 0, 00:22:58.111 "w_mbytes_per_sec": 0 00:22:58.111 }, 00:22:58.111 "claimed": false, 00:22:58.111 "zoned": false, 00:22:58.111 "supported_io_types": { 00:22:58.111 "read": true, 00:22:58.111 "write": true, 00:22:58.111 "unmap": false, 00:22:58.111 "write_zeroes": true, 00:22:58.111 "flush": true, 00:22:58.111 "reset": true, 00:22:58.111 "compare": true, 00:22:58.111 "compare_and_write": true, 00:22:58.111 "abort": true, 00:22:58.111 "nvme_admin": true, 00:22:58.111 "nvme_io": true 00:22:58.111 }, 00:22:58.111 "driver_specific": { 00:22:58.111 "nvme": [ 00:22:58.111 { 00:22:58.111 "trid": { 00:22:58.111 "trtype": "TCP", 00:22:58.111 "adrfam": "IPv4", 00:22:58.111 "traddr": "10.0.0.2", 00:22:58.111 "trsvcid": "4420", 00:22:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:58.111 }, 00:22:58.111 "ctrlr_data": { 00:22:58.111 "cntlid": 2, 00:22:58.111 "vendor_id": "0x8086", 00:22:58.111 "model_number": "SPDK bdev Controller", 00:22:58.111 "serial_number": "00000000000000000000", 00:22:58.111 "firmware_revision": "24.01.1", 00:22:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:58.111 "oacs": { 00:22:58.111 "security": 0, 00:22:58.111 "format": 0, 00:22:58.111 "firmware": 0, 00:22:58.111 "ns_manage": 0 00:22:58.111 }, 00:22:58.111 "multi_ctrlr": true, 00:22:58.111 "ana_reporting": false 00:22:58.111 }, 00:22:58.111 "vs": { 00:22:58.111 "nvme_version": "1.3" 00:22:58.111 }, 00:22:58.111 "ns_data": { 00:22:58.111 "id": 1, 00:22:58.111 "can_share": true 00:22:58.111 } 00:22:58.111 } 00:22:58.111 ], 00:22:58.111 "mp_policy": "active_passive" 00:22:58.111 } 00:22:58.111 } 00:22:58.111 ] 00:22:58.111 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.111 07:00:12 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.111 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.111 07:00:12 -- host/async_init.sh@53 -- # mktemp 00:22:58.111 07:00:12 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.s3fDTbVsDg 00:22:58.111 07:00:12 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:58.111 07:00:12 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.s3fDTbVsDg 00:22:58.111 07:00:12 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 07:00:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.111 07:00:12 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:58.111 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 [2024-05-15 07:00:12.296274] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.111 [2024-05-15 07:00:12.296408] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:58.111 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.111 07:00:12 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s3fDTbVsDg 00:22:58.111 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.111 07:00:12 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.s3fDTbVsDg 00:22:58.111 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.111 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.111 [2024-05-15 07:00:12.312305] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.368 nvme0n1 00:22:58.368 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.368 07:00:12 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:58.368 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.368 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.368 [ 00:22:58.368 { 00:22:58.368 "name": "nvme0n1", 00:22:58.368 "aliases": [ 00:22:58.368 "0650169d-bce5-45b6-8782-e1f8d4377c26" 00:22:58.368 ], 00:22:58.368 "product_name": "NVMe disk", 00:22:58.368 "block_size": 512, 00:22:58.368 "num_blocks": 2097152, 00:22:58.368 "uuid": "0650169d-bce5-45b6-8782-e1f8d4377c26", 00:22:58.368 "assigned_rate_limits": { 00:22:58.368 "rw_ios_per_sec": 0, 00:22:58.368 "rw_mbytes_per_sec": 0, 00:22:58.368 "r_mbytes_per_sec": 0, 00:22:58.368 "w_mbytes_per_sec": 0 00:22:58.368 }, 00:22:58.368 "claimed": false, 00:22:58.368 "zoned": false, 00:22:58.368 "supported_io_types": { 00:22:58.368 "read": true, 00:22:58.368 "write": true, 00:22:58.369 "unmap": false, 00:22:58.369 "write_zeroes": true, 00:22:58.369 "flush": true, 00:22:58.369 "reset": true, 00:22:58.369 "compare": true, 00:22:58.369 "compare_and_write": true, 00:22:58.369 "abort": true, 00:22:58.369 "nvme_admin": true, 00:22:58.369 "nvme_io": true 00:22:58.369 }, 00:22:58.369 "driver_specific": { 00:22:58.369 "nvme": [ 00:22:58.369 { 00:22:58.369 "trid": { 00:22:58.369 "trtype": "TCP", 00:22:58.369 "adrfam": "IPv4", 00:22:58.369 "traddr": "10.0.0.2", 00:22:58.369 "trsvcid": "4421", 00:22:58.369 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:58.369 }, 00:22:58.369 "ctrlr_data": { 00:22:58.369 "cntlid": 3, 00:22:58.369 "vendor_id": "0x8086", 00:22:58.369 "model_number": "SPDK bdev Controller", 00:22:58.369 "serial_number": "00000000000000000000", 00:22:58.369 "firmware_revision": "24.01.1", 00:22:58.369 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:58.369 "oacs": { 00:22:58.369 "security": 0, 00:22:58.369 "format": 0, 00:22:58.369 "firmware": 0, 00:22:58.369 
"ns_manage": 0 00:22:58.369 }, 00:22:58.369 "multi_ctrlr": true, 00:22:58.369 "ana_reporting": false 00:22:58.369 }, 00:22:58.369 "vs": { 00:22:58.369 "nvme_version": "1.3" 00:22:58.369 }, 00:22:58.369 "ns_data": { 00:22:58.369 "id": 1, 00:22:58.369 "can_share": true 00:22:58.369 } 00:22:58.369 } 00:22:58.369 ], 00:22:58.369 "mp_policy": "active_passive" 00:22:58.369 } 00:22:58.369 } 00:22:58.369 ] 00:22:58.369 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.369 07:00:12 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.369 07:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.369 07:00:12 -- common/autotest_common.sh@10 -- # set +x 00:22:58.369 07:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.369 07:00:12 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.s3fDTbVsDg 00:22:58.369 07:00:12 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:58.369 07:00:12 -- host/async_init.sh@78 -- # nvmftestfini 00:22:58.369 07:00:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:58.369 07:00:12 -- nvmf/common.sh@116 -- # sync 00:22:58.369 07:00:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:58.369 07:00:12 -- nvmf/common.sh@119 -- # set +e 00:22:58.369 07:00:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:58.369 07:00:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:58.369 rmmod nvme_tcp 00:22:58.369 rmmod nvme_fabrics 00:22:58.369 rmmod nvme_keyring 00:22:58.369 07:00:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:58.369 07:00:12 -- nvmf/common.sh@123 -- # set -e 00:22:58.369 07:00:12 -- nvmf/common.sh@124 -- # return 0 00:22:58.369 07:00:12 -- nvmf/common.sh@477 -- # '[' -n 578950 ']' 00:22:58.369 07:00:12 -- nvmf/common.sh@478 -- # killprocess 578950 00:22:58.369 07:00:12 -- common/autotest_common.sh@926 -- # '[' -z 578950 ']' 00:22:58.369 07:00:12 -- common/autotest_common.sh@930 -- # kill -0 578950 00:22:58.369 07:00:12 -- common/autotest_common.sh@931 -- # uname 00:22:58.369 07:00:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:58.369 07:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 578950 00:22:58.369 07:00:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:58.369 07:00:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:58.369 07:00:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 578950' 00:22:58.369 killing process with pid 578950 00:22:58.369 07:00:12 -- common/autotest_common.sh@945 -- # kill 578950 00:22:58.369 07:00:12 -- common/autotest_common.sh@950 -- # wait 578950 00:22:58.626 07:00:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:58.626 07:00:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:58.626 07:00:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:58.626 07:00:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.626 07:00:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:58.626 07:00:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.626 07:00:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.626 07:00:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.156 07:00:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:01.156 00:23:01.156 real 0m6.614s 00:23:01.156 user 0m3.124s 00:23:01.156 sys 0m2.100s 00:23:01.156 07:00:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.156 07:00:14 -- 
common/autotest_common.sh@10 -- # set +x 00:23:01.156 ************************************ 00:23:01.156 END TEST nvmf_async_init 00:23:01.156 ************************************ 00:23:01.156 07:00:14 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:01.156 07:00:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:01.156 07:00:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:01.156 07:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.156 ************************************ 00:23:01.156 START TEST dma 00:23:01.156 ************************************ 00:23:01.156 07:00:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:01.156 * Looking for test storage... 00:23:01.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.156 07:00:14 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.156 07:00:14 -- nvmf/common.sh@7 -- # uname -s 00:23:01.156 07:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.156 07:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.156 07:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.156 07:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.156 07:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.156 07:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.156 07:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.156 07:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.156 07:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.156 07:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.156 07:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.156 07:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.156 07:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.156 07:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.156 07:00:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.156 07:00:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.156 07:00:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.157 07:00:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.157 07:00:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.157 07:00:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- paths/export.sh@5 -- # export PATH 00:23:01.157 07:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- nvmf/common.sh@46 -- # : 0 00:23:01.157 07:00:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:01.157 07:00:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:01.157 07:00:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:01.157 07:00:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.157 07:00:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.157 07:00:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:01.157 07:00:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:01.157 07:00:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:01.157 07:00:14 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:01.157 07:00:14 -- host/dma.sh@13 -- # exit 0 00:23:01.157 00:23:01.157 real 0m0.062s 00:23:01.157 user 0m0.034s 00:23:01.157 sys 0m0.033s 00:23:01.157 07:00:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:01.157 07:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.157 ************************************ 00:23:01.157 END TEST dma 00:23:01.157 ************************************ 00:23:01.157 07:00:14 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:01.157 07:00:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:01.157 07:00:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:01.157 07:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:01.157 ************************************ 00:23:01.157 START TEST nvmf_identify 00:23:01.157 ************************************ 00:23:01.157 07:00:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:01.157 * Looking for 
test storage... 00:23:01.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.157 07:00:14 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.157 07:00:14 -- nvmf/common.sh@7 -- # uname -s 00:23:01.157 07:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.157 07:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.157 07:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.157 07:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.157 07:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.157 07:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.157 07:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.157 07:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.157 07:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.157 07:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.157 07:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.157 07:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.157 07:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.157 07:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.157 07:00:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.157 07:00:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.157 07:00:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.157 07:00:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.157 07:00:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.157 07:00:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- paths/export.sh@5 -- # export PATH 00:23:01.157 07:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.157 07:00:14 -- nvmf/common.sh@46 -- # : 0 00:23:01.157 07:00:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:01.157 07:00:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:01.157 07:00:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:01.157 07:00:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.157 07:00:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.157 07:00:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:01.157 07:00:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:01.157 07:00:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:01.157 07:00:14 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:01.157 07:00:14 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:01.157 07:00:14 -- host/identify.sh@14 -- # nvmftestinit 00:23:01.157 07:00:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:01.157 07:00:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.157 07:00:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:01.157 07:00:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:01.157 07:00:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:01.157 07:00:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.157 07:00:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:01.157 07:00:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.157 07:00:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:01.157 07:00:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:01.157 07:00:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:01.157 07:00:14 -- common/autotest_common.sh@10 -- # set +x 00:23:03.687 07:00:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:03.687 07:00:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:03.687 07:00:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:03.687 07:00:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:03.687 07:00:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:03.687 07:00:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:03.687 07:00:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:03.687 07:00:17 -- nvmf/common.sh@294 -- # net_devs=() 00:23:03.687 07:00:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:03.687 07:00:17 -- nvmf/common.sh@295 
-- # e810=() 00:23:03.687 07:00:17 -- nvmf/common.sh@295 -- # local -ga e810 00:23:03.687 07:00:17 -- nvmf/common.sh@296 -- # x722=() 00:23:03.687 07:00:17 -- nvmf/common.sh@296 -- # local -ga x722 00:23:03.687 07:00:17 -- nvmf/common.sh@297 -- # mlx=() 00:23:03.687 07:00:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:03.687 07:00:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.687 07:00:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.687 07:00:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.687 07:00:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.687 07:00:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.687 07:00:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.687 07:00:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.687 07:00:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.688 07:00:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.688 07:00:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.688 07:00:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.688 07:00:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:03.688 07:00:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:03.688 07:00:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:03.688 07:00:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:03.688 07:00:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:03.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:03.688 07:00:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:03.688 07:00:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:03.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:03.688 07:00:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:03.688 07:00:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:03.688 07:00:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.688 07:00:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:03.688 07:00:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.688 07:00:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:03.688 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:23:03.688 07:00:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.688 07:00:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:03.688 07:00:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.688 07:00:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:03.688 07:00:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.688 07:00:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:03.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:03.688 07:00:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.688 07:00:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:03.688 07:00:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:03.688 07:00:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:03.688 07:00:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.688 07:00:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.688 07:00:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.688 07:00:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:03.688 07:00:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.688 07:00:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.688 07:00:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:03.688 07:00:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.688 07:00:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.688 07:00:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:03.688 07:00:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:03.688 07:00:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.688 07:00:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.688 07:00:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.688 07:00:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.688 07:00:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:03.688 07:00:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.688 07:00:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.688 07:00:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.688 07:00:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:03.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:23:03.688 00:23:03.688 --- 10.0.0.2 ping statistics --- 00:23:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.688 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:23:03.688 07:00:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:23:03.688 00:23:03.688 --- 10.0.0.1 ping statistics --- 00:23:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.688 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:03.688 07:00:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.688 07:00:17 -- nvmf/common.sh@410 -- # return 0 00:23:03.688 07:00:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:03.688 07:00:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.688 07:00:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:03.688 07:00:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.688 07:00:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:03.688 07:00:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:03.688 07:00:17 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:03.688 07:00:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:03.688 07:00:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.688 07:00:17 -- host/identify.sh@19 -- # nvmfpid=581515 00:23:03.688 07:00:17 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:03.688 07:00:17 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.688 07:00:17 -- host/identify.sh@23 -- # waitforlisten 581515 00:23:03.688 07:00:17 -- common/autotest_common.sh@819 -- # '[' -z 581515 ']' 00:23:03.688 07:00:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.688 07:00:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:03.688 07:00:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.688 07:00:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:03.688 07:00:17 -- common/autotest_common.sh@10 -- # set +x 00:23:03.688 [2024-05-15 07:00:17.585143] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:03.688 [2024-05-15 07:00:17.585235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.688 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.688 [2024-05-15 07:00:17.671841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:03.688 [2024-05-15 07:00:17.790693] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:03.688 [2024-05-15 07:00:17.790840] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.688 [2024-05-15 07:00:17.790857] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.688 [2024-05-15 07:00:17.790869] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
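The -m 0xF on the nvmf_tgt command line above requests one reactor per set bit (cores 0-3), which is why "Total cores available: 4" is reported; the reactor-start notices that follow match the mask bit for bit. A trivial decode of such a mask, plain bash arithmetic with nothing SPDK-specific:

    mask=0xF
    for core in 0 1 2 3; do
        (( (mask >> core) & 1 )) && echo "expect a reactor on core $core"
    done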
00:23:03.688 [2024-05-15 07:00:17.793957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.688 [2024-05-15 07:00:17.794017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.688 [2024-05-15 07:00:17.794110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:03.688 [2024-05-15 07:00:17.794113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.624 07:00:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:04.625 07:00:18 -- common/autotest_common.sh@852 -- # return 0 00:23:04.625 07:00:18 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.625 07:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 [2024-05-15 07:00:18.557355] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.625 07:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.625 07:00:18 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:04.625 07:00:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 07:00:18 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:04.625 07:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 Malloc0 00:23:04.625 07:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.625 07:00:18 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:04.625 07:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 07:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.625 07:00:18 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:04.625 07:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 07:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.625 07:00:18 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:04.625 07:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 [2024-05-15 07:00:18.628365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.625 07:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.625 07:00:18 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:04.625 07:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 07:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.625 07:00:18 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:04.625 07:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.625 07:00:18 -- common/autotest_common.sh@10 -- # set +x 00:23:04.625 [2024-05-15 07:00:18.644131] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:04.625 [ 
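The "(( i == 0 ))" / "return 0" pair above is waitforlisten finishing: the helper polls up to max_retries=100 times for the target to come up on its RPC socket. A minimal stand-in for that loop, not the actual helper, with the pid and default socket path taken from this run:

    pid=581515
    sock=/var/tmp/spdk.sock
    for i in $(seq 100 -1 0); do
        kill -0 "$pid" 2>/dev/null || { echo "target exited early"; break; }
        [ -S "$sock" ] && break          # socket present: target is listening
        sleep 0.1
    done
    (( i == 0 )) && echo "timed out waiting for $sock"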
00:23:04.625 { 00:23:04.625 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:04.625 "subtype": "Discovery", 00:23:04.625 "listen_addresses": [ 00:23:04.625 { 00:23:04.625 "transport": "TCP", 00:23:04.625 "trtype": "TCP", 00:23:04.625 "adrfam": "IPv4", 00:23:04.625 "traddr": "10.0.0.2", 00:23:04.625 "trsvcid": "4420" 00:23:04.625 } 00:23:04.625 ], 00:23:04.625 "allow_any_host": true, 00:23:04.625 "hosts": [] 00:23:04.625 }, 00:23:04.625 { 00:23:04.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.625 "subtype": "NVMe", 00:23:04.625 "listen_addresses": [ 00:23:04.625 { 00:23:04.625 "transport": "TCP", 00:23:04.625 "trtype": "TCP", 00:23:04.625 "adrfam": "IPv4", 00:23:04.625 "traddr": "10.0.0.2", 00:23:04.625 "trsvcid": "4420" 00:23:04.625 } 00:23:04.625 ], 00:23:04.625 "allow_any_host": true, 00:23:04.625 "hosts": [], 00:23:04.625 "serial_number": "SPDK00000000000001", 00:23:04.625 "model_number": "SPDK bdev Controller", 00:23:04.625 "max_namespaces": 32, 00:23:04.625 "min_cntlid": 1, 00:23:04.625 "max_cntlid": 65519, 00:23:04.625 "namespaces": [ 00:23:04.625 { 00:23:04.625 "nsid": 1, 00:23:04.625 "bdev_name": "Malloc0", 00:23:04.625 "name": "Malloc0", 00:23:04.625 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:04.625 "eui64": "ABCDEF0123456789", 00:23:04.625 "uuid": "6d40a62e-0554-40d3-acdf-fe126c2226cc" 00:23:04.625 } 00:23:04.625 ] 00:23:04.625 } 00:23:04.625 ] 00:23:04.625 07:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.625 07:00:18 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:04.625 [2024-05-15 07:00:18.667462] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
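The bracketed JSON above is the raw nvmf_get_subsystems reply that rpc_cmd captured: the always-present discovery subsystem plus the cnode1 subsystem carrying Malloc0. Outside the harness the same listing can be pulled with the standalone RPC client from the spdk checkout (jq assumed installed):

    scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
    # nqn.2014-08.org.nvmexpress.discovery
    # nqn.2016-06.io.spdk:cnode1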
00:23:04.625 [2024-05-15 07:00:18.667509] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581673 ] 00:23:04.625 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.625 [2024-05-15 07:00:18.700343] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:04.625 [2024-05-15 07:00:18.700409] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:04.625 [2024-05-15 07:00:18.700419] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:04.625 [2024-05-15 07:00:18.700434] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:04.625 [2024-05-15 07:00:18.700447] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:04.625 [2024-05-15 07:00:18.703984] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:04.625 [2024-05-15 07:00:18.704036] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1cede10 0 00:23:04.625 [2024-05-15 07:00:18.711005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:04.625 [2024-05-15 07:00:18.711026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:04.625 [2024-05-15 07:00:18.711035] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:04.625 [2024-05-15 07:00:18.711042] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:04.625 [2024-05-15 07:00:18.711093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.711106] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.711114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.625 [2024-05-15 07:00:18.711131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:04.625 [2024-05-15 07:00:18.711158] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.625 [2024-05-15 07:00:18.717943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.625 [2024-05-15 07:00:18.717961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.625 [2024-05-15 07:00:18.717973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.717981] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.625 [2024-05-15 07:00:18.717999] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:04.625 [2024-05-15 07:00:18.718010] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:04.625 [2024-05-15 07:00:18.718019] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:04.625 [2024-05-15 07:00:18.718042] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718059] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.625 [2024-05-15 07:00:18.718071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.625 [2024-05-15 07:00:18.718094] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.625 [2024-05-15 07:00:18.718296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.625 [2024-05-15 07:00:18.718312] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.625 [2024-05-15 07:00:18.718320] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718327] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.625 [2024-05-15 07:00:18.718342] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:04.625 [2024-05-15 07:00:18.718357] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:04.625 [2024-05-15 07:00:18.718370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718378] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718385] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.625 [2024-05-15 07:00:18.718401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.625 [2024-05-15 07:00:18.718423] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.625 [2024-05-15 07:00:18.718639] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.625 [2024-05-15 07:00:18.718655] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.625 [2024-05-15 07:00:18.718662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.625 [2024-05-15 07:00:18.718681] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:04.625 [2024-05-15 07:00:18.718696] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:04.625 [2024-05-15 07:00:18.718708] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718716] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.718723] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.625 [2024-05-15 07:00:18.718733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.625 [2024-05-15 07:00:18.718754] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.625 [2024-05-15 07:00:18.718978] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.625 [2024-05-15 
07:00:18.718995] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.625 [2024-05-15 07:00:18.719002] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.625 [2024-05-15 07:00:18.719009] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.626 [2024-05-15 07:00:18.719020] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:04.626 [2024-05-15 07:00:18.719037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.719064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.626 [2024-05-15 07:00:18.719085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.626 [2024-05-15 07:00:18.719251] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.626 [2024-05-15 07:00:18.719263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.626 [2024-05-15 07:00:18.719270] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719277] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.626 [2024-05-15 07:00:18.719288] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:04.626 [2024-05-15 07:00:18.719296] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:04.626 [2024-05-15 07:00:18.719310] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:04.626 [2024-05-15 07:00:18.719420] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:04.626 [2024-05-15 07:00:18.719428] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:04.626 [2024-05-15 07:00:18.719462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719471] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.719488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.626 [2024-05-15 07:00:18.719509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.626 [2024-05-15 07:00:18.719695] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.626 [2024-05-15 07:00:18.719710] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.626 [2024-05-15 07:00:18.719718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.626 [2024-05-15 07:00:18.719735] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:04.626 [2024-05-15 07:00:18.719752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.719768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.719779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.626 [2024-05-15 07:00:18.719800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.626 [2024-05-15 07:00:18.719972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.626 [2024-05-15 07:00:18.719988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.626 [2024-05-15 07:00:18.719995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720003] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.626 [2024-05-15 07:00:18.720012] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:04.626 [2024-05-15 07:00:18.720021] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:04.626 [2024-05-15 07:00:18.720035] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:04.626 [2024-05-15 07:00:18.720050] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:04.626 [2024-05-15 07:00:18.720065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720073] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.720091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.626 [2024-05-15 07:00:18.720113] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.626 [2024-05-15 07:00:18.720326] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.626 [2024-05-15 07:00:18.720342] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.626 [2024-05-15 07:00:18.720350] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720357] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cede10): datao=0, datal=4096, cccid=0 00:23:04.626 [2024-05-15 07:00:18.720365] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6dbf0) on tqpair(0x1cede10): 
expected_datao=0, payload_size=4096 00:23:04.626 [2024-05-15 07:00:18.720383] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720393] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.626 [2024-05-15 07:00:18.720495] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.626 [2024-05-15 07:00:18.720502] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720509] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.626 [2024-05-15 07:00:18.720522] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:04.626 [2024-05-15 07:00:18.720537] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:04.626 [2024-05-15 07:00:18.720546] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:04.626 [2024-05-15 07:00:18.720555] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:04.626 [2024-05-15 07:00:18.720563] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:04.626 [2024-05-15 07:00:18.720572] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:04.626 [2024-05-15 07:00:18.720586] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:04.626 [2024-05-15 07:00:18.720599] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.720641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:04.626 [2024-05-15 07:00:18.720662] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.626 [2024-05-15 07:00:18.720879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.626 [2024-05-15 07:00:18.720895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.626 [2024-05-15 07:00:18.720902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6dbf0) on tqpair=0x1cede10 00:23:04.626 [2024-05-15 07:00:18.720924] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.720958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
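The SET FEATURES ASYNC EVENT CONFIGURATION above, followed by ASYNC EVENT REQUEST submissions on cid 0 here and cid 1 through 3 just below, is the driver arming its admin-level AER slots; the identify output later reports an Async Event Request Limit of 4, which is why exactly four are queued. A hedged sketch of how an application observes those completions through the public spdk_nvme_ctrlr_register_aer_callback() API (the surrounding function names are illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

/* Runs when the target completes one of the outstanding ASYNC EVENT
 * REQUESTs, e.g. for the Discovery Log Change Notices this subsystem
 * advertises as Supported. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* CDW0 of the completion encodes the async event type
		 * and info fields defined by the NVMe specification. */
		printf("AER: cdw0=0x%x\n", cpl->cdw0);
	}
}

/* Registered once after connect; the driver itself resubmits an AER
 * each time one completes. */
static void
arm_aer_callback(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}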
00:23:04.626 [2024-05-15 07:00:18.720968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.720982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.720991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.626 [2024-05-15 07:00:18.721001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.721008] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.721015] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.721024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.626 [2024-05-15 07:00:18.721038] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.721046] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.721053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.721062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.626 [2024-05-15 07:00:18.721071] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:04.626 [2024-05-15 07:00:18.721091] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:04.626 [2024-05-15 07:00:18.721104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.721112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.626 [2024-05-15 07:00:18.721118] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cede10) 00:23:04.626 [2024-05-15 07:00:18.721129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.626 [2024-05-15 07:00:18.721152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dbf0, cid 0, qid 0 00:23:04.626 [2024-05-15 07:00:18.721163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6dd50, cid 1, qid 0 00:23:04.626 [2024-05-15 07:00:18.721172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6deb0, cid 2, qid 0 00:23:04.626 [2024-05-15 07:00:18.721180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.626 [2024-05-15 07:00:18.721188] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e170, cid 4, qid 0 00:23:04.626 [2024-05-15 07:00:18.721401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.626 [2024-05-15 07:00:18.721417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.626 [2024-05-15 07:00:18.721425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.721432] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e170) on tqpair=0x1cede10 00:23:04.627 [2024-05-15 07:00:18.721442] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:04.627 [2024-05-15 07:00:18.721452] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:04.627 [2024-05-15 07:00:18.721470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.721479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.721486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cede10) 00:23:04.627 [2024-05-15 07:00:18.721497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.627 [2024-05-15 07:00:18.721518] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e170, cid 4, qid 0 00:23:04.627 [2024-05-15 07:00:18.721691] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.627 [2024-05-15 07:00:18.721706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.627 [2024-05-15 07:00:18.721714] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.721720] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cede10): datao=0, datal=4096, cccid=4 00:23:04.627 [2024-05-15 07:00:18.721729] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6e170) on tqpair(0x1cede10): expected_datao=0, payload_size=4096 00:23:04.627 [2024-05-15 07:00:18.721788] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.721797] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.725942] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.627 [2024-05-15 07:00:18.725959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.627 [2024-05-15 07:00:18.725967] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.725974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e170) on tqpair=0x1cede10 00:23:04.627 [2024-05-15 07:00:18.725995] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:04.627 [2024-05-15 07:00:18.726026] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726037] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726044] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cede10) 00:23:04.627 [2024-05-15 07:00:18.726055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.627 [2024-05-15 07:00:18.726067] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726075] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726081] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1cede10) 00:23:04.627 [2024-05-15 
07:00:18.726091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.627 [2024-05-15 07:00:18.726121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e170, cid 4, qid 0 00:23:04.627 [2024-05-15 07:00:18.726134] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e2d0, cid 5, qid 0 00:23:04.627 [2024-05-15 07:00:18.726371] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.627 [2024-05-15 07:00:18.726387] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.627 [2024-05-15 07:00:18.726395] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726401] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cede10): datao=0, datal=1024, cccid=4 00:23:04.627 [2024-05-15 07:00:18.726410] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6e170) on tqpair(0x1cede10): expected_datao=0, payload_size=1024 00:23:04.627 [2024-05-15 07:00:18.726421] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726429] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.627 [2024-05-15 07:00:18.726447] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.627 [2024-05-15 07:00:18.726455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.726462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e2d0) on tqpair=0x1cede10 00:23:04.627 [2024-05-15 07:00:18.767116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.627 [2024-05-15 07:00:18.767135] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.627 [2024-05-15 07:00:18.767143] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e170) on tqpair=0x1cede10 00:23:04.627 [2024-05-15 07:00:18.767170] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767180] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767187] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cede10) 00:23:04.627 [2024-05-15 07:00:18.767198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.627 [2024-05-15 07:00:18.767228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e170, cid 4, qid 0 00:23:04.627 [2024-05-15 07:00:18.767418] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.627 [2024-05-15 07:00:18.767435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.627 [2024-05-15 07:00:18.767443] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767450] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cede10): datao=0, datal=3072, cccid=4 00:23:04.627 [2024-05-15 07:00:18.767458] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6e170) on tqpair(0x1cede10): expected_datao=0, payload_size=3072 
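The GET LOG PAGE commands in this stretch all target log page 0x70, the discovery log. In CDW10 the low byte is the log page ID and bits 31:16 are NUMDL, the zero-based dword count: cdw10:00ff0070 reads 1024 bytes (the header plus leading records), cdw10:02ff0070 reads the full 3072-byte page, and cdw10:00010070 re-reads just the 8-byte generation counter to verify the log did not change between reads. A sketch of one such read through the public API; the wrapper name is illustrative, and the buffer would normally be a struct spdk_nvmf_discovery_log_page from spdk/nvmf_spec.h:

#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

/* One read of the discovery log (LID 0x70, SPDK_NVME_LOG_DISCOVERY),
 * as issued with varying sizes in the trace. cb_fn is an ordinary
 * spdk_nvme_cmd_cb reaped by polling the admin queue. */
static int
read_discovery_log(struct spdk_nvme_ctrlr *ctrlr,
		   struct spdk_nvmf_discovery_log_page *buf, uint32_t size,
		   uint64_t offset, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
	return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
						0 /* nsid:0, as traced */,
						buf, size, offset,
						cb_fn, cb_arg);
}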
00:23:04.627 [2024-05-15 07:00:18.767470] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767478] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.627 [2024-05-15 07:00:18.767564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.627 [2024-05-15 07:00:18.767571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767578] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e170) on tqpair=0x1cede10 00:23:04.627 [2024-05-15 07:00:18.767594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767603] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1cede10) 00:23:04.627 [2024-05-15 07:00:18.767621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.627 [2024-05-15 07:00:18.767648] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e170, cid 4, qid 0 00:23:04.627 [2024-05-15 07:00:18.767836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.627 [2024-05-15 07:00:18.767852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.627 [2024-05-15 07:00:18.767859] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767866] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1cede10): datao=0, datal=8, cccid=4 00:23:04.627 [2024-05-15 07:00:18.767874] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d6e170) on tqpair(0x1cede10): expected_datao=0, payload_size=8 00:23:04.627 [2024-05-15 07:00:18.767885] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.767893] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.812945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.627 [2024-05-15 07:00:18.812964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.627 [2024-05-15 07:00:18.812972] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.627 [2024-05-15 07:00:18.812980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e170) on tqpair=0x1cede10
00:23:04.627 =====================================================
00:23:04.627 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:04.627 =====================================================
00:23:04.627 Controller Capabilities/Features
00:23:04.627 ================================
00:23:04.627 Vendor ID: 0000
00:23:04.627 Subsystem Vendor ID: 0000
00:23:04.627 Serial Number: ....................
00:23:04.627 Model Number: ........................................
00:23:04.627 Firmware Version: 24.01.1
00:23:04.627 Recommended Arb Burst: 0
00:23:04.627 IEEE OUI Identifier: 00 00 00
00:23:04.627 Multi-path I/O
00:23:04.627 May have multiple subsystem ports: No
00:23:04.627 May have multiple controllers: No
00:23:04.627 Associated with SR-IOV VF: No
00:23:04.627 Max Data Transfer Size: 131072
00:23:04.627 Max Number of Namespaces: 0
00:23:04.627 Max Number of I/O Queues: 1024
00:23:04.627 NVMe Specification Version (VS): 1.3
00:23:04.627 NVMe Specification Version (Identify): 1.3
00:23:04.627 Maximum Queue Entries: 128
00:23:04.627 Contiguous Queues Required: Yes
00:23:04.627 Arbitration Mechanisms Supported
00:23:04.627 Weighted Round Robin: Not Supported
00:23:04.627 Vendor Specific: Not Supported
00:23:04.627 Reset Timeout: 15000 ms
00:23:04.627 Doorbell Stride: 4 bytes
00:23:04.627 NVM Subsystem Reset: Not Supported
00:23:04.627 Command Sets Supported
00:23:04.627 NVM Command Set: Supported
00:23:04.627 Boot Partition: Not Supported
00:23:04.627 Memory Page Size Minimum: 4096 bytes
00:23:04.627 Memory Page Size Maximum: 4096 bytes
00:23:04.627 Persistent Memory Region: Not Supported
00:23:04.627 Optional Asynchronous Events Supported
00:23:04.627 Namespace Attribute Notices: Not Supported
00:23:04.627 Firmware Activation Notices: Not Supported
00:23:04.627 ANA Change Notices: Not Supported
00:23:04.627 PLE Aggregate Log Change Notices: Not Supported
00:23:04.627 LBA Status Info Alert Notices: Not Supported
00:23:04.627 EGE Aggregate Log Change Notices: Not Supported
00:23:04.627 Normal NVM Subsystem Shutdown event: Not Supported
00:23:04.627 Zone Descriptor Change Notices: Not Supported
00:23:04.627 Discovery Log Change Notices: Supported
00:23:04.627 Controller Attributes
00:23:04.627 128-bit Host Identifier: Not Supported
00:23:04.627 Non-Operational Permissive Mode: Not Supported
00:23:04.627 NVM Sets: Not Supported
00:23:04.627 Read Recovery Levels: Not Supported
00:23:04.627 Endurance Groups: Not Supported
00:23:04.627 Predictable Latency Mode: Not Supported
00:23:04.627 Traffic Based Keep Alive: Not Supported
00:23:04.627 Namespace Granularity: Not Supported
00:23:04.628 SQ Associations: Not Supported
00:23:04.628 UUID List: Not Supported
00:23:04.628 Multi-Domain Subsystem: Not Supported
00:23:04.628 Fixed Capacity Management: Not Supported
00:23:04.628 Variable Capacity Management: Not Supported
00:23:04.628 Delete Endurance Group: Not Supported
00:23:04.628 Delete NVM Set: Not Supported
00:23:04.628 Extended LBA Formats Supported: Not Supported
00:23:04.628 Flexible Data Placement Supported: Not Supported
00:23:04.628
00:23:04.628 Controller Memory Buffer Support
00:23:04.628 ================================
00:23:04.628 Supported: No
00:23:04.628
00:23:04.628 Persistent Memory Region Support
00:23:04.628 ================================
00:23:04.628 Supported: No
00:23:04.628
00:23:04.628 Admin Command Set Attributes
00:23:04.628 ============================
00:23:04.628 Security Send/Receive: Not Supported
00:23:04.628 Format NVM: Not Supported
00:23:04.628 Firmware Activate/Download: Not Supported
00:23:04.628 Namespace Management: Not Supported
00:23:04.628 Device Self-Test: Not Supported
00:23:04.628 Directives: Not Supported
00:23:04.628 NVMe-MI: Not Supported
00:23:04.628 Virtualization Management: Not Supported
00:23:04.628 Doorbell Buffer Config: Not Supported
00:23:04.628 Get LBA Status Capability: Not Supported
00:23:04.628 Command & Feature Lockdown Capability: Not Supported
00:23:04.628 Abort Command Limit: 1
00:23:04.628 Async Event Request Limit: 4
00:23:04.628 Number of Firmware Slots: N/A
00:23:04.628 Firmware Slot 1 Read-Only: N/A
00:23:04.628 Firmware Activation Without Reset: N/A
00:23:04.628 Multiple Update Detection Support: N/A
00:23:04.628 Firmware Update Granularity: No Information Provided
00:23:04.628 Per-Namespace SMART Log: No
00:23:04.628 Asymmetric Namespace Access Log Page: Not Supported
00:23:04.628 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:04.628 Command Effects Log Page: Not Supported
00:23:04.628 Get Log Page Extended Data: Supported
00:23:04.628 Telemetry Log Pages: Not Supported
00:23:04.628 Persistent Event Log Pages: Not Supported
00:23:04.628 Supported Log Pages Log Page: May Support
00:23:04.628 Commands Supported & Effects Log Page: Not Supported
00:23:04.628 Feature Identifiers & Effects Log Page: May Support
00:23:04.628 NVMe-MI Commands & Effects Log Page: May Support
00:23:04.628 Data Area 4 for Telemetry Log: Not Supported
00:23:04.628 Error Log Page Entries Supported: 128
00:23:04.628 Keep Alive: Not Supported
00:23:04.628
00:23:04.628 NVM Command Set Attributes
00:23:04.628 ==========================
00:23:04.628 Submission Queue Entry Size
00:23:04.628 Max: 1
00:23:04.628 Min: 1
00:23:04.628 Completion Queue Entry Size
00:23:04.628 Max: 1
00:23:04.628 Min: 1
00:23:04.628 Number of Namespaces: 0
00:23:04.628 Compare Command: Not Supported
00:23:04.628 Write Uncorrectable Command: Not Supported
00:23:04.628 Dataset Management Command: Not Supported
00:23:04.628 Write Zeroes Command: Not Supported
00:23:04.628 Set Features Save Field: Not Supported
00:23:04.628 Reservations: Not Supported
00:23:04.628 Timestamp: Not Supported
00:23:04.628 Copy: Not Supported
00:23:04.628 Volatile Write Cache: Not Present
00:23:04.628 Atomic Write Unit (Normal): 1
00:23:04.628 Atomic Write Unit (PFail): 1
00:23:04.628 Atomic Compare & Write Unit: 1
00:23:04.628 Fused Compare & Write: Supported
00:23:04.628 Scatter-Gather List
00:23:04.628 SGL Command Set: Supported
00:23:04.628 SGL Keyed: Supported
00:23:04.628 SGL Bit Bucket Descriptor: Not Supported
00:23:04.628 SGL Metadata Pointer: Not Supported
00:23:04.628 Oversized SGL: Not Supported
00:23:04.628 SGL Metadata Address: Not Supported
00:23:04.628 SGL Offset: Supported
00:23:04.628 Transport SGL Data Block: Not Supported
00:23:04.628 Replay Protected Memory Block: Not Supported
00:23:04.628
00:23:04.628 Firmware Slot Information
00:23:04.628 =========================
00:23:04.628 Active slot: 0
00:23:04.628
00:23:04.628
00:23:04.628 Error Log
00:23:04.628 =========
00:23:04.628
00:23:04.628 Active Namespaces
00:23:04.628 =================
00:23:04.628 Discovery Log Page
00:23:04.628 ==================
00:23:04.628 Generation Counter: 2
00:23:04.628 Number of Records: 2
00:23:04.628 Record Format: 0
00:23:04.628
00:23:04.628 Discovery Log Entry 0
00:23:04.628 ----------------------
00:23:04.628 Transport Type: 3 (TCP)
00:23:04.628 Address Family: 1 (IPv4)
00:23:04.628 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:04.628 Entry Flags:
00:23:04.628 Duplicate Returned Information: 1
00:23:04.628 Explicit Persistent Connection Support for Discovery: 1
00:23:04.628 Transport Requirements:
00:23:04.628 Secure Channel: Not Required
00:23:04.628 Port ID: 0 (0x0000)
00:23:04.628 Controller ID: 65535 (0xffff)
00:23:04.628 Admin Max SQ Size: 128
00:23:04.628 Transport Service Identifier: 4420
00:23:04.628 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:04.628 Transport Address: 10.0.0.2
00:23:04.628 Discovery Log Entry 1
00:23:04.628 ----------------------
00:23:04.628 Transport Type: 3 (TCP)
00:23:04.628 Address Family: 1 (IPv4)
00:23:04.628 Subsystem Type: 2 (NVM Subsystem)
00:23:04.628 Entry Flags:
00:23:04.628 Duplicate Returned Information: 0
00:23:04.628 Explicit Persistent Connection Support for Discovery: 0
00:23:04.628 Transport Requirements:
00:23:04.628 Secure Channel: Not Required
00:23:04.628 Port ID: 0 (0x0000)
00:23:04.628 Controller ID: 65535 (0xffff)
00:23:04.628 Admin Max SQ Size: 128
00:23:04.628 Transport Service Identifier: 4420
00:23:04.628 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:04.628 Transport Address: 10.0.0.2 [2024-05-15 07:00:18.813100] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:04.628 [2024-05-15 07:00:18.813126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.628 [2024-05-15 07:00:18.813139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.628 [2024-05-15 07:00:18.813150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.628 [2024-05-15 07:00:18.813161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.628 [2024-05-15 07:00:18.813178] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813189] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813196] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.628 [2024-05-15 07:00:18.813208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.628 [2024-05-15 07:00:18.813236] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.628 [2024-05-15 07:00:18.813441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.628 [2024-05-15 07:00:18.813455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.628 [2024-05-15 07:00:18.813463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.628 [2024-05-15 07:00:18.813484] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813492] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813498] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.628 [2024-05-15 07:00:18.813509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.628 [2024-05-15 07:00:18.813535] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.628 [2024-05-15 07:00:18.813741] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.628 [2024-05-15 07:00:18.813757] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.628 [2024-05-15 07:00:18.813765]
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813772] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.628 [2024-05-15 07:00:18.813782] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:04.628 [2024-05-15 07:00:18.813790] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:04.628 [2024-05-15 07:00:18.813806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813816] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.813822] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.628 [2024-05-15 07:00:18.813833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.628 [2024-05-15 07:00:18.813853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.628 [2024-05-15 07:00:18.814031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.628 [2024-05-15 07:00:18.814045] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.628 [2024-05-15 07:00:18.814052] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.814059] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.628 [2024-05-15 07:00:18.814077] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.628 [2024-05-15 07:00:18.814088] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.814106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.814127] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.814310] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.814326] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.814333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814340] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.814360] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814369] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814381] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.814392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.814413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.814575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 
07:00:18.814591] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.814599] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814606] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.814624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814640] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.814650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.814671] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.814829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.814842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.814849] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814856] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.814873] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814883] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.814890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.814900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.814920] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.815108] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.815121] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.815129] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.815153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815170] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.815180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.815201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.815385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.815401] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.815408] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
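The run of FABRIC PROPERTY GET commands on cid:3 here is the host polling CSTS during the orderly shutdown that began at "Prepare to destruct SSD" above: RTD3E is 0, so the default 10000 ms shutdown timeout applies, CC is written with SHN set (the FABRIC PROPERTY SET earlier), and CSTS is read until SHST reports shutdown complete, which the trace below reaches after 7 milliseconds. A sketch of the non-blocking detach pair in the public API that performs this sequence, assuming the spdk_nvme_detach_async()/spdk_nvme_detach_poll_async() interface from spdk/nvme.h:

#include <errno.h>
#include "spdk/nvme.h"

/* Non-blocking teardown: detach_async queues the CC shutdown write,
 * and each poll pass issues or reaps one of the CSTS property reads
 * seen in this trace until the controller reports shutdown complete. */
static void
detach_and_poll(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
		return;
	}
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		/* Busy-poll; a real application would yield between passes. */
	}
}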
00:23:04.629 [2024-05-15 07:00:18.815415] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.815433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815443] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.815464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.815485] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.815649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.815661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.815669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815676] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.815693] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815702] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815709] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.815720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.815740] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.815904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.815920] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.815927] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815941] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.815960] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815969] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.815976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.815987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.816008] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.816190] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.816202] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.816209] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.816216] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.816234] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.816243] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.816250] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.816261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.816281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.816460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.816476] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.629 [2024-05-15 07:00:18.816483] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.816490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.629 [2024-05-15 07:00:18.816508] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.816518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.629 [2024-05-15 07:00:18.816524] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.629 [2024-05-15 07:00:18.816541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.629 [2024-05-15 07:00:18.816563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.629 [2024-05-15 07:00:18.816724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.629 [2024-05-15 07:00:18.816737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.630 [2024-05-15 07:00:18.816744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.630 [2024-05-15 07:00:18.816751] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.630 [2024-05-15 07:00:18.816768] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.630 [2024-05-15 07:00:18.816778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.630 [2024-05-15 07:00:18.816785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.630 [2024-05-15 07:00:18.816795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.630 [2024-05-15 07:00:18.816816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.630 [2024-05-15 07:00:18.820942] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.630 [2024-05-15 07:00:18.820970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.630 [2024-05-15 07:00:18.820978] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.630 [2024-05-15 07:00:18.820985] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.630 [2024-05-15 07:00:18.821018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.630 [2024-05-15 07:00:18.821028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.630 [2024-05-15 
07:00:18.821035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1cede10) 00:23:04.630 [2024-05-15 07:00:18.821046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.630 [2024-05-15 07:00:18.821069] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d6e010, cid 3, qid 0 00:23:04.630 [2024-05-15 07:00:18.821247] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.630 [2024-05-15 07:00:18.821260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.630 [2024-05-15 07:00:18.821267] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.630 [2024-05-15 07:00:18.821274] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d6e010) on tqpair=0x1cede10 00:23:04.630 [2024-05-15 07:00:18.821289] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:04.630 00:23:04.630 07:00:18 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:04.630 [2024-05-15 07:00:18.852545] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:04.630 [2024-05-15 07:00:18.852582] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581694 ] 00:23:04.889 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.889 [2024-05-15 07:00:18.886699] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:04.889 [2024-05-15 07:00:18.886747] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:04.889 [2024-05-15 07:00:18.886757] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:04.889 [2024-05-15 07:00:18.886784] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:04.889 [2024-05-15 07:00:18.886795] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:04.889 [2024-05-15 07:00:18.887135] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:04.889 [2024-05-15 07:00:18.887174] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x192fe10 0 00:23:04.889 [2024-05-15 07:00:18.897941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:04.889 [2024-05-15 07:00:18.897962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:04.889 [2024-05-15 07:00:18.897970] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:04.889 [2024-05-15 07:00:18.897976] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:04.889 [2024-05-15 07:00:18.898017] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.889 [2024-05-15 07:00:18.898028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.889 [2024-05-15 07:00:18.898035] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.889 [2024-05-15 
07:00:18.898049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:04.889 [2024-05-15 07:00:18.898075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.889 [2024-05-15 07:00:18.905950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.889 [2024-05-15 07:00:18.905968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.889 [2024-05-15 07:00:18.905990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.889 [2024-05-15 07:00:18.905998] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.889 [2024-05-15 07:00:18.906013] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:04.889 [2024-05-15 07:00:18.906024] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:04.889 [2024-05-15 07:00:18.906033] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:04.889 [2024-05-15 07:00:18.906053] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.889 [2024-05-15 07:00:18.906062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.889 [2024-05-15 07:00:18.906068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.889 [2024-05-15 07:00:18.906080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.889 [2024-05-15 07:00:18.906103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.889 [2024-05-15 07:00:18.906313] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.906328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.906335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.906364] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:04.890 [2024-05-15 07:00:18.906379] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:04.890 [2024-05-15 07:00:18.906391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906399] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906405] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.906416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.890 [2024-05-15 07:00:18.906441] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.890 [2024-05-15 07:00:18.906640] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.906651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.906658] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906665] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.906675] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:04.890 [2024-05-15 07:00:18.906689] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:04.890 [2024-05-15 07:00:18.906701] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906709] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906716] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.906726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.890 [2024-05-15 07:00:18.906746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.890 [2024-05-15 07:00:18.906911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.906926] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.906946] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.906963] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:04.890 [2024-05-15 07:00:18.906981] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906990] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.906997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.907007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.890 [2024-05-15 07:00:18.907028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.890 [2024-05-15 07:00:18.907186] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.907198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.907205] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.907212] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.907221] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:04.890 [2024-05-15 07:00:18.907229] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:04.890 [2024-05-15 07:00:18.907242] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:04.890 [2024-05-15 
07:00:18.907352] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:04.890 [2024-05-15 07:00:18.907374] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:04.890 [2024-05-15 07:00:18.907386] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.907393] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.907400] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.907414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.890 [2024-05-15 07:00:18.907440] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.890 [2024-05-15 07:00:18.907643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.907658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.907665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.907672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.907682] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:04.890 [2024-05-15 07:00:18.907699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.907708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.907714] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.907725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.890 [2024-05-15 07:00:18.907746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.890 [2024-05-15 07:00:18.907909] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.907924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.907941] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.907949] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.907958] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:04.890 [2024-05-15 07:00:18.907967] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:04.890 [2024-05-15 07:00:18.907980] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:04.890 [2024-05-15 07:00:18.907994] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:04.890 [2024-05-15 07:00:18.908008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.908015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.908022] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.908033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.890 [2024-05-15 07:00:18.908054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.890 [2024-05-15 07:00:18.908277] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.890 [2024-05-15 07:00:18.908289] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.890 [2024-05-15 07:00:18.908296] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.908303] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=4096, cccid=0 00:23:04.890 [2024-05-15 07:00:18.908312] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19afbf0) on tqpair(0x192fe10): expected_datao=0, payload_size=4096 00:23:04.890 [2024-05-15 07:00:18.908372] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.908382] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.953939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.953962] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.953985] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.953992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.954005] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:04.890 [2024-05-15 07:00:18.954018] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:04.890 [2024-05-15 07:00:18.954026] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:04.890 [2024-05-15 07:00:18.954033] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:04.890 [2024-05-15 07:00:18.954040] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:04.890 [2024-05-15 07:00:18.954049] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:04.890 [2024-05-15 07:00:18.954064] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:04.890 [2024-05-15 07:00:18.954076] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.954084] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.954090] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.954102] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 
0x0 len:0x0 00:23:04.890 [2024-05-15 07:00:18.954125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.890 [2024-05-15 07:00:18.954323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.890 [2024-05-15 07:00:18.954335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.890 [2024-05-15 07:00:18.954342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.954349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19afbf0) on tqpair=0x192fe10 00:23:04.890 [2024-05-15 07:00:18.954360] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.954368] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.954374] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x192fe10) 00:23:04.890 [2024-05-15 07:00:18.954384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.890 [2024-05-15 07:00:18.954394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.954401] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.890 [2024-05-15 07:00:18.954407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.954416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.891 [2024-05-15 07:00:18.954425] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.954447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.891 [2024-05-15 07:00:18.954457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954463] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954470] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.954482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.891 [2024-05-15 07:00:18.954508] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.954526] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.954538] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954545] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954551] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.954561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE 
TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.891 [2024-05-15 07:00:18.954583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afbf0, cid 0, qid 0 00:23:04.891 [2024-05-15 07:00:18.954609] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afd50, cid 1, qid 0 00:23:04.891 [2024-05-15 07:00:18.954618] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19afeb0, cid 2, qid 0 00:23:04.891 [2024-05-15 07:00:18.954625] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0010, cid 3, qid 0 00:23:04.891 [2024-05-15 07:00:18.954633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0170, cid 4, qid 0 00:23:04.891 [2024-05-15 07:00:18.954858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.891 [2024-05-15 07:00:18.954873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.891 [2024-05-15 07:00:18.954880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954887] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0170) on tqpair=0x192fe10 00:23:04.891 [2024-05-15 07:00:18.954897] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:04.891 [2024-05-15 07:00:18.954906] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.954920] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.954938] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.954950] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.954964] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.954975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:04.891 [2024-05-15 07:00:18.954996] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0170, cid 4, qid 0 00:23:04.891 [2024-05-15 07:00:18.955183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.891 [2024-05-15 07:00:18.955198] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.891 [2024-05-15 07:00:18.955205] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.955212] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0170) on tqpair=0x192fe10 00:23:04.891 [2024-05-15 07:00:18.955266] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.955284] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.955298] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.891 [2024-05-15 
07:00:18.955310] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.955317] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.955328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.891 [2024-05-15 07:00:18.955363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0170, cid 4, qid 0 00:23:04.891 [2024-05-15 07:00:18.955630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.891 [2024-05-15 07:00:18.955646] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.891 [2024-05-15 07:00:18.955653] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.955660] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=4096, cccid=4 00:23:04.891 [2024-05-15 07:00:18.955668] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0170) on tqpair(0x192fe10): expected_datao=0, payload_size=4096 00:23:04.891 [2024-05-15 07:00:18.955722] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.955731] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.891 [2024-05-15 07:00:18.996128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.891 [2024-05-15 07:00:18.996136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0170) on tqpair=0x192fe10 00:23:04.891 [2024-05-15 07:00:18.996160] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:04.891 [2024-05-15 07:00:18.996184] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.996203] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.996216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.996242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.891 [2024-05-15 07:00:18.996264] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0170, cid 4, qid 0 00:23:04.891 [2024-05-15 07:00:18.996461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.891 [2024-05-15 07:00:18.996473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.891 [2024-05-15 07:00:18.996480] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996487] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=4096, cccid=4 00:23:04.891 
[2024-05-15 07:00:18.996495] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0170) on tqpair(0x192fe10): expected_datao=0, payload_size=4096 00:23:04.891 [2024-05-15 07:00:18.996506] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996514] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.891 [2024-05-15 07:00:18.996600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.891 [2024-05-15 07:00:18.996607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996614] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0170) on tqpair=0x192fe10 00:23:04.891 [2024-05-15 07:00:18.996636] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.996657] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:18.996672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996679] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996686] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192fe10) 00:23:04.891 [2024-05-15 07:00:18.996697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.891 [2024-05-15 07:00:18.996718] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0170, cid 4, qid 0 00:23:04.891 [2024-05-15 07:00:18.996895] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.891 [2024-05-15 07:00:18.996907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.891 [2024-05-15 07:00:18.996914] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996920] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=4096, cccid=4 00:23:04.891 [2024-05-15 07:00:18.996928] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0170) on tqpair(0x192fe10): expected_datao=0, payload_size=4096 00:23:04.891 [2024-05-15 07:00:18.996989] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:18.996999] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:19.041943] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.891 [2024-05-15 07:00:19.041961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.891 [2024-05-15 07:00:19.041968] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.891 [2024-05-15 07:00:19.041990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0170) on tqpair=0x192fe10 00:23:04.891 [2024-05-15 07:00:19.042005] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:19.042021] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to set supported log pages (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:19.042036] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:19.042047] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:19.042056] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:19.042065] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:04.891 [2024-05-15 07:00:19.042073] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:04.891 [2024-05-15 07:00:19.042082] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:04.891 [2024-05-15 07:00:19.042102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042111] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.042129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.042140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042147] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.042167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.892 [2024-05-15 07:00:19.042194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0170, cid 4, qid 0 00:23:04.892 [2024-05-15 07:00:19.042206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b02d0, cid 5, qid 0 00:23:04.892 [2024-05-15 07:00:19.042385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.042397] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.042405] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042412] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0170) on tqpair=0x192fe10 00:23:04.892 [2024-05-15 07:00:19.042424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.042433] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.042440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b02d0) on tqpair=0x192fe10 00:23:04.892 [2024-05-15 07:00:19.042463] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042472] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
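The trace to this point has stepped the initiator through SPDK's controller-initialization state machine: icreq/icresp, FABRIC CONNECT on the admin queue, VS/CAP property reads, CC.EN = 1 and the CSTS.RDY poll, IDENTIFY, AER configuration, keep-alive setup, and queue-count negotiation, ending at "setting state to ready". From application code the whole sequence is hidden behind one synchronous connect call. A minimal sketch against the public SPDK C API (error handling trimmed; the app name is a hypothetical placeholder, and the connection string is the same one the harness passes to spdk_nvme_identify -r above):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";  /* hypothetical application name */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Same target string the harness passes to spdk_nvme_identify -r */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* This one call drives the whole state machine traced above: icreq,
     * FABRIC CONNECT, CC.EN = 1, CSTS.RDY poll, IDENTIFY, AER setup,
     * keep-alive and queue-count negotiation. NULL opts = defaults. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    printf("controller is ready\n");
    spdk_nvme_detach(ctrlr);
    return 0;
}

Every GET/SET FEATURES and IDENTIFY capsule logged above is issued by this init state machine before spdk_nvme_connect() returns.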
00:23:04.892 [2024-05-15 07:00:19.042478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.042489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.042509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b02d0, cid 5, qid 0 00:23:04.892 [2024-05-15 07:00:19.042679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.042691] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.042698] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b02d0) on tqpair=0x192fe10 00:23:04.892 [2024-05-15 07:00:19.042721] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042730] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.042747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.042767] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b02d0, cid 5, qid 0 00:23:04.892 [2024-05-15 07:00:19.042939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.042952] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.042959] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b02d0) on tqpair=0x192fe10 00:23:04.892 [2024-05-15 07:00:19.042983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042992] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.042999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.043009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.043029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b02d0, cid 5, qid 0 00:23:04.892 [2024-05-15 07:00:19.043203] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.043218] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.043228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043236] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b02d0) on tqpair=0x192fe10 00:23:04.892 [2024-05-15 07:00:19.043257] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.043284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.043296] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043304] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.043319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.043331] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043344] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.043354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.043381] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043388] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x192fe10) 00:23:04.892 [2024-05-15 07:00:19.043404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.892 [2024-05-15 07:00:19.043425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b02d0, cid 5, qid 0 00:23:04.892 [2024-05-15 07:00:19.043451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0170, cid 4, qid 0 00:23:04.892 [2024-05-15 07:00:19.043459] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0430, cid 6, qid 0 00:23:04.892 [2024-05-15 07:00:19.043467] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0590, cid 7, qid 0 00:23:04.892 [2024-05-15 07:00:19.043836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.892 [2024-05-15 07:00:19.043852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.892 [2024-05-15 07:00:19.043859] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043866] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=8192, cccid=5 00:23:04.892 [2024-05-15 07:00:19.043874] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b02d0) on tqpair(0x192fe10): expected_datao=0, payload_size=8192 00:23:04.892 [2024-05-15 07:00:19.043886] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043894] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043903] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.892 [2024-05-15 07:00:19.043912] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.892 
[2024-05-15 07:00:19.043918] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043925] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=512, cccid=4 00:23:04.892 [2024-05-15 07:00:19.043941] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0170) on tqpair(0x192fe10): expected_datao=0, payload_size=512 00:23:04.892 [2024-05-15 07:00:19.043956] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043964] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043973] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.892 [2024-05-15 07:00:19.043982] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.892 [2024-05-15 07:00:19.043989] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.043995] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=512, cccid=6 00:23:04.892 [2024-05-15 07:00:19.044003] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0430) on tqpair(0x192fe10): expected_datao=0, payload_size=512 00:23:04.892 [2024-05-15 07:00:19.044014] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.044021] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.044029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:04.892 [2024-05-15 07:00:19.044038] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:04.892 [2024-05-15 07:00:19.044045] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.044051] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x192fe10): datao=0, datal=4096, cccid=7 00:23:04.892 [2024-05-15 07:00:19.044059] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0590) on tqpair(0x192fe10): expected_datao=0, payload_size=4096 00:23:04.892 [2024-05-15 07:00:19.044070] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.044077] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.044089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.044099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.044105] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.044112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b02d0) on tqpair=0x192fe10 00:23:04.892 [2024-05-15 07:00:19.044134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.044145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.044152] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.892 [2024-05-15 07:00:19.044159] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0170) on tqpair=0x192fe10 00:23:04.892 [2024-05-15 07:00:19.044174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.892 [2024-05-15 07:00:19.044184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.892 [2024-05-15 07:00:19.044191] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:04.892 [2024-05-15 07:00:19.044198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0430) on tqpair=0x192fe10
00:23:04.892 [2024-05-15 07:00:19.044210] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:04.892 [2024-05-15 07:00:19.044219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:04.892 [2024-05-15 07:00:19.044226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:04.892 [2024-05-15 07:00:19.044248] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0590) on tqpair=0x192fe10
00:23:04.892 =====================================================
00:23:04.893 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:04.893 =====================================================
00:23:04.893 Controller Capabilities/Features
00:23:04.893 ================================
00:23:04.893 Vendor ID: 8086
00:23:04.893 Subsystem Vendor ID: 8086
00:23:04.893 Serial Number: SPDK00000000000001
00:23:04.893 Model Number: SPDK bdev Controller
00:23:04.893 Firmware Version: 24.01.1
00:23:04.893 Recommended Arb Burst: 6
00:23:04.893 IEEE OUI Identifier: e4 d2 5c
00:23:04.893 Multi-path I/O
00:23:04.893 May have multiple subsystem ports: Yes
00:23:04.893 May have multiple controllers: Yes
00:23:04.893 Associated with SR-IOV VF: No
00:23:04.893 Max Data Transfer Size: 131072
00:23:04.893 Max Number of Namespaces: 32
00:23:04.893 Max Number of I/O Queues: 127
00:23:04.893 NVMe Specification Version (VS): 1.3
00:23:04.893 NVMe Specification Version (Identify): 1.3
00:23:04.893 Maximum Queue Entries: 128
00:23:04.893 Contiguous Queues Required: Yes
00:23:04.893 Arbitration Mechanisms Supported
00:23:04.893 Weighted Round Robin: Not Supported
00:23:04.893 Vendor Specific: Not Supported
00:23:04.893 Reset Timeout: 15000 ms
00:23:04.893 Doorbell Stride: 4 bytes
00:23:04.893 NVM Subsystem Reset: Not Supported
00:23:04.893 Command Sets Supported
00:23:04.893 NVM Command Set: Supported
00:23:04.893 Boot Partition: Not Supported
00:23:04.893 Memory Page Size Minimum: 4096 bytes
00:23:04.893 Memory Page Size Maximum: 4096 bytes
00:23:04.893 Persistent Memory Region: Not Supported
00:23:04.893 Optional Asynchronous Events Supported
00:23:04.893 Namespace Attribute Notices: Supported
00:23:04.893 Firmware Activation Notices: Not Supported
00:23:04.893 ANA Change Notices: Not Supported
00:23:04.893 PLE Aggregate Log Change Notices: Not Supported
00:23:04.893 LBA Status Info Alert Notices: Not Supported
00:23:04.893 EGE Aggregate Log Change Notices: Not Supported
00:23:04.893 Normal NVM Subsystem Shutdown event: Not Supported
00:23:04.893 Zone Descriptor Change Notices: Not Supported
00:23:04.893 Discovery Log Change Notices: Not Supported
00:23:04.893 Controller Attributes
00:23:04.893 128-bit Host Identifier: Supported
00:23:04.893 Non-Operational Permissive Mode: Not Supported
00:23:04.893 NVM Sets: Not Supported
00:23:04.893 Read Recovery Levels: Not Supported
00:23:04.893 Endurance Groups: Not Supported
00:23:04.893 Predictable Latency Mode: Not Supported
00:23:04.893 Traffic Based Keep Alive: Not Supported
00:23:04.893 Namespace Granularity: Not Supported
00:23:04.893 SQ Associations: Not Supported
00:23:04.893 UUID List: Not Supported
00:23:04.893 Multi-Domain Subsystem: Not Supported
00:23:04.893 Fixed Capacity Management: Not Supported
00:23:04.893 Variable Capacity Management: Not Supported
00:23:04.893 Delete Endurance Group: Not Supported
00:23:04.893 Delete NVM Set: Not Supported
00:23:04.893 Extended LBA Formats Supported: Not Supported
00:23:04.893 Flexible Data Placement Supported: Not Supported
00:23:04.893
00:23:04.893 Controller Memory Buffer Support
00:23:04.893 ================================
00:23:04.893 Supported: No
00:23:04.893
00:23:04.893 Persistent Memory Region Support
00:23:04.893 ================================
00:23:04.893 Supported: No
00:23:04.893
00:23:04.893 Admin Command Set Attributes
00:23:04.893 ============================
00:23:04.893 Security Send/Receive: Not Supported
00:23:04.893 Format NVM: Not Supported
00:23:04.893 Firmware Activate/Download: Not Supported
00:23:04.893 Namespace Management: Not Supported
00:23:04.893 Device Self-Test: Not Supported
00:23:04.893 Directives: Not Supported
00:23:04.893 NVMe-MI: Not Supported
00:23:04.893 Virtualization Management: Not Supported
00:23:04.893 Doorbell Buffer Config: Not Supported
00:23:04.893 Get LBA Status Capability: Not Supported
00:23:04.893 Command & Feature Lockdown Capability: Not Supported
00:23:04.893 Abort Command Limit: 4
00:23:04.893 Async Event Request Limit: 4
00:23:04.893 Number of Firmware Slots: N/A
00:23:04.893 Firmware Slot 1 Read-Only: N/A
00:23:04.893 Firmware Activation Without Reset: N/A
00:23:04.893 Multiple Update Detection Support: N/A
00:23:04.893 Firmware Update Granularity: No Information Provided
00:23:04.893 Per-Namespace SMART Log: No
00:23:04.893 Asymmetric Namespace Access Log Page: Not Supported
00:23:04.893 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:04.893 Command Effects Log Page: Supported
00:23:04.893 Get Log Page Extended Data: Supported
00:23:04.893 Telemetry Log Pages: Not Supported
00:23:04.893 Persistent Event Log Pages: Not Supported
00:23:04.893 Supported Log Pages Log Page: May Support
00:23:04.893 Commands Supported & Effects Log Page: Not Supported
00:23:04.893 Feature Identifiers & Effects Log Page: May Support
00:23:04.893 NVMe-MI Commands & Effects Log Page: May Support
00:23:04.893 Data Area 4 for Telemetry Log: Not Supported
00:23:04.893 Error Log Page Entries Supported: 128
00:23:04.893 Keep Alive: Supported
00:23:04.893 Keep Alive Granularity: 10000 ms
00:23:04.893
00:23:04.893 NVM Command Set Attributes
00:23:04.893 ==========================
00:23:04.893 Submission Queue Entry Size
00:23:04.893 Max: 64
00:23:04.893 Min: 64
00:23:04.893 Completion Queue Entry Size
00:23:04.893 Max: 16
00:23:04.893 Min: 16
00:23:04.893 Number of Namespaces: 32
00:23:04.893 Compare Command: Supported
00:23:04.893 Write Uncorrectable Command: Not Supported
00:23:04.893 Dataset Management Command: Supported
00:23:04.893 Write Zeroes Command: Supported
00:23:04.893 Set Features Save Field: Not Supported
00:23:04.893 Reservations: Supported
00:23:04.893 Timestamp: Not Supported
00:23:04.893 Copy: Supported
00:23:04.893 Volatile Write Cache: Present
00:23:04.893 Atomic Write Unit (Normal): 1
00:23:04.893 Atomic Write Unit (PFail): 1
00:23:04.893 Atomic Compare & Write Unit: 1
00:23:04.893 Fused Compare & Write: Supported
00:23:04.893 Scatter-Gather List
00:23:04.893 SGL Command Set: Supported
00:23:04.893 SGL Keyed: Supported
00:23:04.893 SGL Bit Bucket Descriptor: Not Supported
00:23:04.893 SGL Metadata Pointer: Not Supported
00:23:04.893 Oversized SGL: Not Supported
00:23:04.893 SGL Metadata Address: Not Supported
00:23:04.893 SGL Offset: Supported
00:23:04.893 Transport SGL Data Block: Not Supported
00:23:04.893 Replay Protected Memory Block: Not Supported
00:23:04.893
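The report above is spdk_nvme_identify's rendering of the IDENTIFY Controller data returned by the target, plus a few transport-derived limits (Max Data Transfer Size, for instance, is MDTS capped by the transport's own maximum). An application can read the same fields directly through the API; a small sketch, assuming `ctrlr` came from a successful spdk_nvme_connect() as sketched earlier:

#include <stdio.h>
#include "spdk/nvme.h"

static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    /* sn/mn/fr are fixed-width, space-padded, not NUL-terminated,
     * hence the explicit precision in the format strings. */
    printf("Serial Number:    %.20s\n", (const char *)cdata->sn);
    printf("Model Number:     %.40s\n", (const char *)cdata->mn);
    printf("Firmware Version: %.8s\n", (const char *)cdata->fr);

    /* 131072 in the report above: MDTS capped by the transport limit */
    printf("Max Data Transfer Size: %u\n",
           spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
}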
00:23:04.893 Firmware Slot Information 00:23:04.893 ========================= 00:23:04.893 Active slot: 1 00:23:04.893 Slot 1 Firmware Revision: 24.01.1 00:23:04.893 00:23:04.893 00:23:04.893 Commands Supported and Effects 00:23:04.893 ============================== 00:23:04.893 Admin Commands 00:23:04.893 -------------- 00:23:04.893 Get Log Page (02h): Supported 00:23:04.893 Identify (06h): Supported 00:23:04.893 Abort (08h): Supported 00:23:04.893 Set Features (09h): Supported 00:23:04.893 Get Features (0Ah): Supported 00:23:04.893 Asynchronous Event Request (0Ch): Supported 00:23:04.893 Keep Alive (18h): Supported 00:23:04.893 I/O Commands 00:23:04.893 ------------ 00:23:04.893 Flush (00h): Supported LBA-Change 00:23:04.893 Write (01h): Supported LBA-Change 00:23:04.893 Read (02h): Supported 00:23:04.893 Compare (05h): Supported 00:23:04.893 Write Zeroes (08h): Supported LBA-Change 00:23:04.893 Dataset Management (09h): Supported LBA-Change 00:23:04.893 Copy (19h): Supported LBA-Change 00:23:04.893 Unknown (79h): Supported LBA-Change 00:23:04.893 Unknown (7Ah): Supported 00:23:04.893 00:23:04.893 Error Log 00:23:04.893 ========= 00:23:04.893 00:23:04.893 Arbitration 00:23:04.893 =========== 00:23:04.893 Arbitration Burst: 1 00:23:04.893 00:23:04.893 Power Management 00:23:04.893 ================ 00:23:04.893 Number of Power States: 1 00:23:04.893 Current Power State: Power State #0 00:23:04.893 Power State #0: 00:23:04.893 Max Power: 0.00 W 00:23:04.893 Non-Operational State: Operational 00:23:04.893 Entry Latency: Not Reported 00:23:04.893 Exit Latency: Not Reported 00:23:04.893 Relative Read Throughput: 0 00:23:04.893 Relative Read Latency: 0 00:23:04.893 Relative Write Throughput: 0 00:23:04.893 Relative Write Latency: 0 00:23:04.893 Idle Power: Not Reported 00:23:04.893 Active Power: Not Reported 00:23:04.893 Non-Operational Permissive Mode: Not Supported 00:23:04.893 00:23:04.893 Health Information 00:23:04.893 ================== 00:23:04.893 Critical Warnings: 00:23:04.893 Available Spare Space: OK 00:23:04.893 Temperature: OK 00:23:04.893 Device Reliability: OK 00:23:04.893 Read Only: No 00:23:04.893 Volatile Memory Backup: OK 00:23:04.893 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:04.893 Temperature Threshold: [2024-05-15 07:00:19.044385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.893 [2024-05-15 07:00:19.044397] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.044404] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x192fe10) 00:23:04.894 [2024-05-15 07:00:19.044415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.894 [2024-05-15 07:00:19.044437] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0590, cid 7, qid 0 00:23:04.894 [2024-05-15 07:00:19.044659] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.894 [2024-05-15 07:00:19.044675] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.894 [2024-05-15 07:00:19.044684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.044691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0590) on tqpair=0x192fe10 00:23:04.894 [2024-05-15 07:00:19.044736] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:04.894 
[2024-05-15 07:00:19.044758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.894 [2024-05-15 07:00:19.044770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.894 [2024-05-15 07:00:19.044780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.894 [2024-05-15 07:00:19.044790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.894 [2024-05-15 07:00:19.044802] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.044810] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.044817] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192fe10) 00:23:04.894 [2024-05-15 07:00:19.044828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.894 [2024-05-15 07:00:19.044864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0010, cid 3, qid 0 00:23:04.894 [2024-05-15 07:00:19.045103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.894 [2024-05-15 07:00:19.045118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.894 [2024-05-15 07:00:19.045125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045132] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0010) on tqpair=0x192fe10 00:23:04.894 [2024-05-15 07:00:19.045144] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045158] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192fe10) 00:23:04.894 [2024-05-15 07:00:19.045169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.894 [2024-05-15 07:00:19.045194] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0010, cid 3, qid 0 00:23:04.894 [2024-05-15 07:00:19.045374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.894 [2024-05-15 07:00:19.045389] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.894 [2024-05-15 07:00:19.045396] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045403] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0010) on tqpair=0x192fe10 00:23:04.894 [2024-05-15 07:00:19.045412] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:04.894 [2024-05-15 07:00:19.045420] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:04.894 [2024-05-15 07:00:19.045436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x192fe10) 00:23:04.894 [2024-05-15 07:00:19.045462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.894 [2024-05-15 07:00:19.045483] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0010, cid 3, qid 0 00:23:04.894 [2024-05-15 07:00:19.045648] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.894 [2024-05-15 07:00:19.045667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.894 [2024-05-15 07:00:19.045674] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045681] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0010) on tqpair=0x192fe10 00:23:04.894 [2024-05-15 07:00:19.045699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045709] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.045715] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192fe10) 00:23:04.894 [2024-05-15 07:00:19.045726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.894 [2024-05-15 07:00:19.045746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0010, cid 3, qid 0 00:23:04.894 [2024-05-15 07:00:19.045911] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.894 [2024-05-15 07:00:19.045923] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.894 [2024-05-15 07:00:19.049938] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.049950] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0010) on tqpair=0x192fe10 00:23:04.894 [2024-05-15 07:00:19.049985] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.049995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.050002] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x192fe10) 00:23:04.894 [2024-05-15 07:00:19.050013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.894 [2024-05-15 07:00:19.050036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0010, cid 3, qid 0 00:23:04.894 [2024-05-15 07:00:19.050229] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:04.894 [2024-05-15 07:00:19.050245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:04.894 [2024-05-15 07:00:19.050252] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:04.894 [2024-05-15 07:00:19.050259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0010) on tqpair=0x192fe10 00:23:04.894 [2024-05-15 07:00:19.050274] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:23:04.894 0 Kelvin (-273 Celsius) 00:23:04.894 Available Spare: 0% 00:23:04.894 Available Spare Threshold: 0% 00:23:04.894 Life Percentage Used: 0% 00:23:04.894 Data Units Read: 0 00:23:04.894 Data Units Written: 0 00:23:04.894 Host Read Commands: 0 00:23:04.894 Host Write Commands: 0 00:23:04.894 Controller Busy Time: 0 minutes 00:23:04.894 Power 
Cycles: 0 00:23:04.894 Power On Hours: 0 hours 00:23:04.894 Unsafe Shutdowns: 0 00:23:04.894 Unrecoverable Media Errors: 0 00:23:04.894 Lifetime Error Log Entries: 0 00:23:04.894 Warning Temperature Time: 0 minutes 00:23:04.894 Critical Temperature Time: 0 minutes 00:23:04.894 00:23:04.894 Number of Queues 00:23:04.894 ================ 00:23:04.894 Number of I/O Submission Queues: 127 00:23:04.894 Number of I/O Completion Queues: 127 00:23:04.894 00:23:04.894 Active Namespaces 00:23:04.894 ================= 00:23:04.894 Namespace ID:1 00:23:04.894 Error Recovery Timeout: Unlimited 00:23:04.894 Command Set Identifier: NVM (00h) 00:23:04.894 Deallocate: Supported 00:23:04.894 Deallocated/Unwritten Error: Not Supported 00:23:04.894 Deallocated Read Value: Unknown 00:23:04.894 Deallocate in Write Zeroes: Not Supported 00:23:04.894 Deallocated Guard Field: 0xFFFF 00:23:04.894 Flush: Supported 00:23:04.894 Reservation: Supported 00:23:04.894 Namespace Sharing Capabilities: Multiple Controllers 00:23:04.894 Size (in LBAs): 131072 (0GiB) 00:23:04.894 Capacity (in LBAs): 131072 (0GiB) 00:23:04.894 Utilization (in LBAs): 131072 (0GiB) 00:23:04.894 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:04.894 EUI64: ABCDEF0123456789 00:23:04.894 UUID: 6d40a62e-0554-40d3-acdf-fe126c2226cc 00:23:04.894 Thin Provisioning: Not Supported 00:23:04.894 Per-NS Atomic Units: Yes 00:23:04.894 Atomic Boundary Size (Normal): 0 00:23:04.894 Atomic Boundary Size (PFail): 0 00:23:04.894 Atomic Boundary Offset: 0 00:23:04.894 Maximum Single Source Range Length: 65535 00:23:04.894 Maximum Copy Length: 65535 00:23:04.894 Maximum Source Range Count: 1 00:23:04.894 NGUID/EUI64 Never Reused: No 00:23:04.894 Namespace Write Protected: No 00:23:04.894 Number of LBA Formats: 1 00:23:04.895 Current LBA Format: LBA Format #00 00:23:04.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:04.895 00:23:04.895 07:00:19 -- host/identify.sh@51 -- # sync 00:23:04.895 07:00:19 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.895 07:00:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.895 07:00:19 -- common/autotest_common.sh@10 -- # set +x 00:23:04.895 07:00:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.895 07:00:19 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:04.895 07:00:19 -- host/identify.sh@56 -- # nvmftestfini 00:23:04.895 07:00:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:04.895 07:00:19 -- nvmf/common.sh@116 -- # sync 00:23:04.895 07:00:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:04.895 07:00:19 -- nvmf/common.sh@119 -- # set +e 00:23:04.895 07:00:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:04.895 07:00:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:04.895 rmmod nvme_tcp 00:23:04.895 rmmod nvme_fabrics 00:23:04.895 rmmod nvme_keyring 00:23:05.152 07:00:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:05.152 07:00:19 -- nvmf/common.sh@123 -- # set -e 00:23:05.152 07:00:19 -- nvmf/common.sh@124 -- # return 0 00:23:05.152 07:00:19 -- nvmf/common.sh@477 -- # '[' -n 581515 ']' 00:23:05.152 07:00:19 -- nvmf/common.sh@478 -- # killprocess 581515 00:23:05.152 07:00:19 -- common/autotest_common.sh@926 -- # '[' -z 581515 ']' 00:23:05.152 07:00:19 -- common/autotest_common.sh@930 -- # kill -0 581515 00:23:05.152 07:00:19 -- common/autotest_common.sh@931 -- # uname 00:23:05.152 07:00:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:05.152 07:00:19 -- common/autotest_common.sh@932 
-- # ps --no-headers -o comm= 581515 00:23:05.152 07:00:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:05.152 07:00:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:05.152 07:00:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 581515' 00:23:05.152 killing process with pid 581515 00:23:05.152 07:00:19 -- common/autotest_common.sh@945 -- # kill 581515 00:23:05.152 [2024-05-15 07:00:19.159275] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:05.152 07:00:19 -- common/autotest_common.sh@950 -- # wait 581515 00:23:05.411 07:00:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:05.411 07:00:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:05.411 07:00:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:05.411 07:00:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.411 07:00:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:05.411 07:00:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.411 07:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.411 07:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.315 07:00:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:07.315 00:23:07.315 real 0m6.599s 00:23:07.315 user 0m7.553s 00:23:07.315 sys 0m2.216s 00:23:07.315 07:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.315 07:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.315 ************************************ 00:23:07.315 END TEST nvmf_identify 00:23:07.315 ************************************ 00:23:07.315 07:00:21 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:07.315 07:00:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:07.315 07:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:07.315 07:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:07.315 ************************************ 00:23:07.315 START TEST nvmf_perf 00:23:07.315 ************************************ 00:23:07.315 07:00:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:07.573 * Looking for test storage... 
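The identify suite above exits through a killprocess helper: check that the pid is still alive (kill -0), confirm on Linux that ps reports the expected process name (reactor_0, never sudo), then kill and wait. A minimal sketch of that pattern, reconstructed from the xtrace; the real common/autotest_common.sh adds retries and xtrace management around it:

    killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1                     # mirrors the '[' -z "$pid" ']' guard
      kill -0 "$pid" || return 0                    # already gone, nothing to do
      if [[ $(uname) == Linux ]]; then
        # refuse to kill a wrapping sudo; the nvmf target reports as reactor_0
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # valid here: the target is a child of this shell
    }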
00:23:07.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.573 07:00:21 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.573 07:00:21 -- nvmf/common.sh@7 -- # uname -s 00:23:07.573 07:00:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.573 07:00:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.573 07:00:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.573 07:00:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.573 07:00:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:07.573 07:00:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:07.573 07:00:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.573 07:00:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:07.573 07:00:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.573 07:00:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:07.573 07:00:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.573 07:00:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.573 07:00:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.573 07:00:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:07.573 07:00:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:07.573 07:00:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.573 07:00:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.573 07:00:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.573 07:00:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.573 07:00:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.573 07:00:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.573 07:00:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.573 07:00:21 -- paths/export.sh@5 -- # export PATH 00:23:07.573 07:00:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.573 07:00:21 -- nvmf/common.sh@46 -- # : 0 00:23:07.573 07:00:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:07.573 07:00:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:07.573 07:00:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:07.573 07:00:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.573 07:00:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.573 07:00:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:07.573 07:00:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:07.573 07:00:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:07.573 07:00:21 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:07.573 07:00:21 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:07.573 07:00:21 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:07.573 07:00:21 -- host/perf.sh@17 -- # nvmftestinit 00:23:07.573 07:00:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:07.573 07:00:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.573 07:00:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:07.574 07:00:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:07.574 07:00:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:07.574 07:00:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.574 07:00:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.574 07:00:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:07.574 07:00:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:07.574 07:00:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:07.574 07:00:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:07.574 07:00:21 -- common/autotest_common.sh@10 -- # set +x 00:23:10.101 07:00:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:10.101 07:00:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:10.101 07:00:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:10.101 07:00:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:10.101 07:00:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:10.101 07:00:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:10.101 07:00:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:10.101 07:00:23 -- nvmf/common.sh@294 -- # net_devs=() 
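What follows is gather_supported_nvmf_pci_devs building ID tables (e810, x722, mlx) and matching them against the PCI bus. A hedged stand-alone sketch of the same idea, reading sysfs directly instead of the pci_bus_cache the real helper consults:

    intel=0x8086
    e810=()
    for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      # 0x1592 and 0x159b are the two E810 device IDs matched in the trace below
      if [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]]; then
        e810+=("${dev##*/}")
        echo "Found ${dev##*/} ($vendor - $device)"
      fi
    done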
00:23:10.101 07:00:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:10.101 07:00:23 -- nvmf/common.sh@295 -- # e810=() 00:23:10.101 07:00:23 -- nvmf/common.sh@295 -- # local -ga e810 00:23:10.101 07:00:23 -- nvmf/common.sh@296 -- # x722=() 00:23:10.101 07:00:23 -- nvmf/common.sh@296 -- # local -ga x722 00:23:10.101 07:00:23 -- nvmf/common.sh@297 -- # mlx=() 00:23:10.101 07:00:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:10.101 07:00:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.101 07:00:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:10.101 07:00:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:10.101 07:00:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:10.101 07:00:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:10.101 07:00:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:10.101 07:00:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:10.101 07:00:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:10.101 07:00:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:10.101 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:10.101 07:00:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:10.102 07:00:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:10.102 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:10.102 07:00:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:10.102 07:00:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:10.102 07:00:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.102 07:00:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:10.102 07:00:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:10.102 07:00:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:10.102 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:10.102 07:00:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.102 07:00:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:10.102 07:00:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.102 07:00:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:10.102 07:00:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.102 07:00:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:10.102 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:10.102 07:00:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.102 07:00:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:10.102 07:00:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:10.102 07:00:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:10.102 07:00:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:10.102 07:00:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.102 07:00:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.102 07:00:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.102 07:00:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:10.102 07:00:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.102 07:00:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.102 07:00:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:10.102 07:00:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.102 07:00:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.102 07:00:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:10.102 07:00:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:10.102 07:00:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.102 07:00:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.102 07:00:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.102 07:00:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.102 07:00:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:10.102 07:00:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.102 07:00:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.102 07:00:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.102 07:00:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:10.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:23:10.102 00:23:10.102 --- 10.0.0.2 ping statistics --- 00:23:10.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.102 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:23:10.102 07:00:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:23:10.102 00:23:10.102 --- 10.0.0.1 ping statistics --- 00:23:10.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.102 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:23:10.102 07:00:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.102 07:00:24 -- nvmf/common.sh@410 -- # return 0 00:23:10.102 07:00:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:10.102 07:00:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.102 07:00:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:10.102 07:00:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:10.102 07:00:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.102 07:00:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:10.102 07:00:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:10.102 07:00:24 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:10.102 07:00:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:10.102 07:00:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:10.102 07:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:10.102 07:00:24 -- nvmf/common.sh@469 -- # nvmfpid=584032 00:23:10.102 07:00:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:10.102 07:00:24 -- nvmf/common.sh@470 -- # waitforlisten 584032 00:23:10.102 07:00:24 -- common/autotest_common.sh@819 -- # '[' -z 584032 ']' 00:23:10.102 07:00:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.102 07:00:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:10.102 07:00:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.102 07:00:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:10.102 07:00:24 -- common/autotest_common.sh@10 -- # set +x 00:23:10.102 [2024-05-15 07:00:24.177559] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:23:10.102 [2024-05-15 07:00:24.177651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.102 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.102 [2024-05-15 07:00:24.255771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.360 [2024-05-15 07:00:24.367168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:10.360 [2024-05-15 07:00:24.367322] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.360 [2024-05-15 07:00:24.367354] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.360 [2024-05-15 07:00:24.367367] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
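The nvmf_tgt instance launched just above runs inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init assembled. Condensed from the trace, with interface names and 10.0.0.x addresses specific to this rig:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> root ns

Since NET_TYPE=phy, the two cvl ports are presumably cabled back-to-back, so every NVMe/TCP byte in this suite crosses the physical E810 link rather than a software loopback.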
00:23:10.360 [2024-05-15 07:00:24.367419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.360 [2024-05-15 07:00:24.367483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.360 [2024-05-15 07:00:24.367512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.360 [2024-05-15 07:00:24.367515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.925 07:00:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:10.925 07:00:25 -- common/autotest_common.sh@852 -- # return 0 00:23:10.925 07:00:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:10.925 07:00:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:10.925 07:00:25 -- common/autotest_common.sh@10 -- # set +x 00:23:10.925 07:00:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.925 07:00:25 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:10.925 07:00:25 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:14.233 07:00:28 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:14.233 07:00:28 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:14.491 07:00:28 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:23:14.491 07:00:28 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:14.491 07:00:28 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:14.491 07:00:28 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:23:14.491 07:00:28 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:14.491 07:00:28 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:14.491 07:00:28 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.748 [2024-05-15 07:00:28.927286] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.748 07:00:28 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:15.005 07:00:29 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:15.005 07:00:29 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:15.263 07:00:29 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:15.263 07:00:29 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:15.521 07:00:29 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.778 [2024-05-15 07:00:29.906969] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.778 07:00:29 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:16.036 07:00:30 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:23:16.036 07:00:30 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:23:16.036 07:00:30 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
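Strung together, the rpc.py calls above amount to this recipe (script paths shortened; Nvme0n1 was attached earlier via gen_nvme.sh and load_subsystem_config):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_malloc_create 64 512                        # returns bdev name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the subsystem exported, spdk_nvme_perf first baselines the local PCIe drive at 0000:88:00.0 and then moves on to the fabric target.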
00:23:16.036 07:00:30 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:23:17.408 Initializing NVMe Controllers 00:23:17.408 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:23:17.408 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:23:17.408 Initialization complete. Launching workers. 00:23:17.408 ======================================================== 00:23:17.408 Latency(us) 00:23:17.408 Device Information : IOPS MiB/s Average min max 00:23:17.408 PCIE (0000:88:00.0) NSID 1 from core 0: 86660.47 338.52 368.79 27.11 6702.37 00:23:17.408 ======================================================== 00:23:17.408 Total : 86660.47 338.52 368.79 27.11 6702.37 00:23:17.408 00:23:17.408 07:00:31 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:17.408 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.779 Initializing NVMe Controllers 00:23:18.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:18.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:18.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:18.779 Initialization complete. Launching workers. 00:23:18.779 ======================================================== 00:23:18.779 Latency(us) 00:23:18.779 Device Information : IOPS MiB/s Average min max 00:23:18.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 91.77 0.36 11069.88 234.29 45760.69 00:23:18.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 75.81 0.30 13702.45 7957.56 47888.31 00:23:18.779 ======================================================== 00:23:18.779 Total : 167.57 0.65 12260.81 234.29 47888.31 00:23:18.779 00:23:18.779 07:00:32 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:18.779 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.716 Initializing NVMe Controllers 00:23:19.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:19.716 Initialization complete. Launching workers. 
00:23:19.716 ======================================================== 00:23:19.716 Latency(us) 00:23:19.716 Device Information : IOPS MiB/s Average min max 00:23:19.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7433.86 29.04 4302.76 938.56 12948.68 00:23:19.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2893.17 11.30 11102.58 5807.73 28994.56 00:23:19.716 ======================================================== 00:23:19.716 Total : 10327.03 40.34 6207.76 938.56 28994.56 00:23:19.716 00:23:19.716 07:00:33 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:19.716 07:00:33 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:19.716 07:00:33 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:19.716 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.243 Initializing NVMe Controllers 00:23:22.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:22.243 Controller IO queue size 128, less than required. 00:23:22.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:22.243 Controller IO queue size 128, less than required. 00:23:22.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:22.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:22.243 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:22.243 Initialization complete. Launching workers. 00:23:22.243 ======================================================== 00:23:22.243 Latency(us) 00:23:22.243 Device Information : IOPS MiB/s Average min max 00:23:22.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 766.41 191.60 174229.49 96175.60 214417.96 00:23:22.243 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.43 149.11 219190.23 85383.49 364704.26 00:23:22.243 ======================================================== 00:23:22.243 Total : 1362.84 340.71 193906.00 85383.49 364704.26 00:23:22.243 00:23:22.243 07:00:36 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:22.243 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.243 No valid NVMe controllers or AIO or URING devices found 00:23:22.243 Initializing NVMe Controllers 00:23:22.243 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:22.243 Controller IO queue size 128, less than required. 00:23:22.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:22.243 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:22.243 Controller IO queue size 128, less than required. 00:23:22.243 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:22.243 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:22.243 WARNING: Some requested NVMe devices were skipped 00:23:22.243 07:00:36 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:22.243 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.521 Initializing NVMe Controllers 00:23:25.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:25.521 Controller IO queue size 128, less than required. 00:23:25.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:25.521 Controller IO queue size 128, less than required. 00:23:25.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:25.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:25.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:25.522 Initialization complete. Launching workers. 00:23:25.522 00:23:25.522 ==================== 00:23:25.522 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:25.522 TCP transport: 00:23:25.522 polls: 26921 00:23:25.522 idle_polls: 8853 00:23:25.522 sock_completions: 18068 00:23:25.522 nvme_completions: 2904 00:23:25.522 submitted_requests: 4528 00:23:25.522 queued_requests: 1 00:23:25.522 00:23:25.522 ==================== 00:23:25.522 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:25.522 TCP transport: 00:23:25.522 polls: 24598 00:23:25.522 idle_polls: 9799 00:23:25.522 sock_completions: 14799 00:23:25.522 nvme_completions: 3040 00:23:25.522 submitted_requests: 4704 00:23:25.522 queued_requests: 1 00:23:25.522 ======================================================== 00:23:25.522 Latency(us) 00:23:25.522 Device Information : IOPS MiB/s Average min max 00:23:25.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 789.50 197.37 167907.75 109594.08 298948.93 00:23:25.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 823.50 205.87 157843.37 46690.53 244067.28 00:23:25.522 ======================================================== 00:23:25.522 Total : 1613.00 403.25 162769.49 46690.53 298948.93 00:23:25.522 00:23:25.522 07:00:39 -- host/perf.sh@66 -- # sync 00:23:25.522 07:00:39 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:25.522 07:00:39 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:25.522 07:00:39 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:23:25.522 07:00:39 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:28.796 07:00:42 -- host/perf.sh@72 -- # ls_guid=4e72fecb-5371-4acb-a3f7-8469ec15f0ed 00:23:28.796 07:00:42 -- host/perf.sh@73 -- # get_lvs_free_mb 4e72fecb-5371-4acb-a3f7-8469ec15f0ed 00:23:28.796 07:00:42 -- common/autotest_common.sh@1343 -- # local lvs_uuid=4e72fecb-5371-4acb-a3f7-8469ec15f0ed 00:23:28.796 07:00:42 -- common/autotest_common.sh@1344 -- # local lvs_info 00:23:28.796 07:00:42 -- common/autotest_common.sh@1345 -- # local fc 00:23:28.796 07:00:42 -- common/autotest_common.sh@1346 -- # local cs 00:23:28.796 07:00:42 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:29.052 07:00:43 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:23:29.052 { 00:23:29.052 "uuid": "4e72fecb-5371-4acb-a3f7-8469ec15f0ed", 00:23:29.052 "name": "lvs_0", 00:23:29.052 "base_bdev": "Nvme0n1", 00:23:29.052 "total_data_clusters": 238234, 00:23:29.052 "free_clusters": 238234, 00:23:29.052 "block_size": 512, 00:23:29.052 "cluster_size": 4194304 00:23:29.052 } 00:23:29.052 ]' 00:23:29.052 07:00:43 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="4e72fecb-5371-4acb-a3f7-8469ec15f0ed") .free_clusters' 00:23:29.052 07:00:43 -- common/autotest_common.sh@1348 -- # fc=238234 00:23:29.052 07:00:43 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="4e72fecb-5371-4acb-a3f7-8469ec15f0ed") .cluster_size' 00:23:29.052 07:00:43 -- common/autotest_common.sh@1349 -- # cs=4194304 00:23:29.052 07:00:43 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:23:29.052 07:00:43 -- common/autotest_common.sh@1353 -- # echo 952936 00:23:29.052 952936 00:23:29.052 07:00:43 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:23:29.052 07:00:43 -- host/perf.sh@78 -- # free_mb=20480 00:23:29.052 07:00:43 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4e72fecb-5371-4acb-a3f7-8469ec15f0ed lbd_0 20480 00:23:29.616 07:00:43 -- host/perf.sh@80 -- # lb_guid=9ef4cecc-40d8-4e6f-a02f-e16faa64a4dc 00:23:29.616 07:00:43 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 9ef4cecc-40d8-4e6f-a02f-e16faa64a4dc lvs_n_0 00:23:30.179 07:00:44 -- host/perf.sh@83 -- # ls_nested_guid=3f09b223-1faa-4ceb-91ad-837f9c460b10 00:23:30.179 07:00:44 -- host/perf.sh@84 -- # get_lvs_free_mb 3f09b223-1faa-4ceb-91ad-837f9c460b10 00:23:30.179 07:00:44 -- common/autotest_common.sh@1343 -- # local lvs_uuid=3f09b223-1faa-4ceb-91ad-837f9c460b10 00:23:30.179 07:00:44 -- common/autotest_common.sh@1344 -- # local lvs_info 00:23:30.179 07:00:44 -- common/autotest_common.sh@1345 -- # local fc 00:23:30.179 07:00:44 -- common/autotest_common.sh@1346 -- # local cs 00:23:30.179 07:00:44 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:30.436 07:00:44 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:23:30.436 { 00:23:30.436 "uuid": "4e72fecb-5371-4acb-a3f7-8469ec15f0ed", 00:23:30.436 "name": "lvs_0", 00:23:30.436 "base_bdev": "Nvme0n1", 00:23:30.436 "total_data_clusters": 238234, 00:23:30.436 "free_clusters": 233114, 00:23:30.436 "block_size": 512, 00:23:30.436 "cluster_size": 4194304 00:23:30.436 }, 00:23:30.436 { 00:23:30.436 "uuid": "3f09b223-1faa-4ceb-91ad-837f9c460b10", 00:23:30.436 "name": "lvs_n_0", 00:23:30.436 "base_bdev": "9ef4cecc-40d8-4e6f-a02f-e16faa64a4dc", 00:23:30.436 "total_data_clusters": 5114, 00:23:30.436 "free_clusters": 5114, 00:23:30.436 "block_size": 512, 00:23:30.436 "cluster_size": 4194304 00:23:30.436 } 00:23:30.436 ]' 00:23:30.436 07:00:44 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="3f09b223-1faa-4ceb-91ad-837f9c460b10") .free_clusters' 00:23:30.436 07:00:44 -- common/autotest_common.sh@1348 -- # fc=5114 00:23:30.436 07:00:44 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="3f09b223-1faa-4ceb-91ad-837f9c460b10") .cluster_size' 00:23:30.693 07:00:44 -- common/autotest_common.sh@1349 -- # cs=4194304 00:23:30.693 07:00:44 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:23:30.693 07:00:44 -- common/autotest_common.sh@1353 -- # echo 20456 00:23:30.693 20456 00:23:30.693 07:00:44 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:23:30.693 07:00:44 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f09b223-1faa-4ceb-91ad-837f9c460b10 lbd_nest_0 20456 00:23:30.950 07:00:44 -- host/perf.sh@88 -- # lb_nested_guid=7cdc3863-2c9f-4d1a-a801-c39f54f04d15 00:23:30.950 07:00:44 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.207 07:00:45 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:31.207 07:00:45 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 7cdc3863-2c9f-4d1a-a801-c39f54f04d15 00:23:31.207 07:00:45 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.464 07:00:45 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:31.464 07:00:45 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:31.464 07:00:45 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:31.464 07:00:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:31.464 07:00:45 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:31.464 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.707 Initializing NVMe Controllers 00:23:43.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:43.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:43.707 Initialization complete. Launching workers. 00:23:43.707 ======================================================== 00:23:43.707 Latency(us) 00:23:43.707 Device Information : IOPS MiB/s Average min max 00:23:43.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.80 0.02 20981.60 260.26 45851.48 00:23:43.707 ======================================================== 00:23:43.708 Total : 47.80 0.02 20981.60 260.26 45851.48 00:23:43.708 00:23:43.708 07:00:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:43.708 07:00:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:43.708 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.671 Initializing NVMe Controllers 00:23:53.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:53.671 Initialization complete. Launching workers. 
00:23:53.671 ======================================================== 00:23:53.671 Latency(us) 00:23:53.671 Device Information : IOPS MiB/s Average min max 00:23:53.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 85.08 10.63 11762.68 6019.82 17972.16 00:23:53.671 ======================================================== 00:23:53.671 Total : 85.08 10.63 11762.68 6019.82 17972.16 00:23:53.671 00:23:53.671 07:01:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:53.671 07:01:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:53.671 07:01:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:53.671 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.629 Initializing NVMe Controllers 00:24:03.629 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.629 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.629 Initialization complete. Launching workers. 00:24:03.629 ======================================================== 00:24:03.629 Latency(us) 00:24:03.629 Device Information : IOPS MiB/s Average min max 00:24:03.629 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6682.99 3.26 4787.77 313.16 12180.74 00:24:03.629 ======================================================== 00:24:03.629 Total : 6682.99 3.26 4787.77 313.16 12180.74 00:24:03.629 00:24:03.629 07:01:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:03.629 07:01:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:03.629 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.589 Initializing NVMe Controllers 00:24:13.589 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.589 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:13.589 Initialization complete. Launching workers. 00:24:13.589 ======================================================== 00:24:13.589 Latency(us) 00:24:13.589 Device Information : IOPS MiB/s Average min max 00:24:13.589 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1544.50 193.06 20747.27 2090.91 44078.81 00:24:13.589 ======================================================== 00:24:13.589 Total : 1544.50 193.06 20747.27 2090.91 44078.81 00:24:13.589 00:24:13.589 07:01:27 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:13.589 07:01:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:13.589 07:01:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.589 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.571 Initializing NVMe Controllers 00:24:23.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:23.571 Controller IO queue size 128, less than required. 00:24:23.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:23.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:23.571 Initialization complete. Launching workers. 
00:24:23.571 ======================================================== 00:24:23.571 Latency(us) 00:24:23.571 Device Information : IOPS MiB/s Average min max 00:24:23.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12128.20 5.92 10558.65 1674.23 24759.71 00:24:23.572 ======================================================== 00:24:23.572 Total : 12128.20 5.92 10558.65 1674.23 24759.71 00:24:23.572 00:24:23.572 07:01:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:23.572 07:01:37 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.572 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.767 Initializing NVMe Controllers 00:24:35.767 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:35.767 Controller IO queue size 128, less than required. 00:24:35.767 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:35.767 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:35.767 Initialization complete. Launching workers. 00:24:35.767 ======================================================== 00:24:35.767 Latency(us) 00:24:35.767 Device Information : IOPS MiB/s Average min max 00:24:35.767 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1223.80 152.97 105191.78 24352.88 226934.31 00:24:35.767 ======================================================== 00:24:35.767 Total : 1223.80 152.97 105191.78 24352.88 226934.31 00:24:35.767 00:24:35.767 07:01:47 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:35.767 07:01:48 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7cdc3863-2c9f-4d1a-a801-c39f54f04d15 00:24:35.767 07:01:48 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:35.767 07:01:49 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9ef4cecc-40d8-4e6f-a02f-e16faa64a4dc 00:24:35.767 07:01:49 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:35.767 07:01:49 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:35.767 07:01:49 -- host/perf.sh@114 -- # nvmftestfini 00:24:35.767 07:01:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:35.767 07:01:49 -- nvmf/common.sh@116 -- # sync 00:24:35.767 07:01:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:35.767 07:01:49 -- nvmf/common.sh@119 -- # set +e 00:24:35.767 07:01:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:35.767 07:01:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:35.767 rmmod nvme_tcp 00:24:35.767 rmmod nvme_fabrics 00:24:35.767 rmmod nvme_keyring 00:24:35.767 07:01:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:35.767 07:01:49 -- nvmf/common.sh@123 -- # set -e 00:24:35.767 07:01:49 -- nvmf/common.sh@124 -- # return 0 00:24:35.767 07:01:49 -- nvmf/common.sh@477 -- # '[' -n 584032 ']' 00:24:35.767 07:01:49 -- nvmf/common.sh@478 -- # killprocess 584032 00:24:35.767 07:01:49 -- common/autotest_common.sh@926 -- # '[' -z 584032 ']' 00:24:35.767 07:01:49 -- common/autotest_common.sh@930 -- # kill 
-0 584032 00:24:35.767 07:01:49 -- common/autotest_common.sh@931 -- # uname 00:24:35.767 07:01:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:35.767 07:01:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 584032 00:24:35.767 07:01:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:35.767 07:01:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:35.767 07:01:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 584032' 00:24:35.767 killing process with pid 584032 00:24:35.767 07:01:49 -- common/autotest_common.sh@945 -- # kill 584032 00:24:35.767 07:01:49 -- common/autotest_common.sh@950 -- # wait 584032 00:24:37.665 07:01:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:37.665 07:01:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:37.665 07:01:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:37.665 07:01:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.665 07:01:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:37.665 07:01:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.665 07:01:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.665 07:01:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.566 07:01:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:39.566 00:24:39.566 real 1m31.905s 00:24:39.566 user 5m30.464s 00:24:39.566 sys 0m15.551s 00:24:39.566 07:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.566 07:01:53 -- common/autotest_common.sh@10 -- # set +x 00:24:39.566 ************************************ 00:24:39.566 END TEST nvmf_perf 00:24:39.566 ************************************ 00:24:39.566 07:01:53 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:39.566 07:01:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:39.566 07:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:39.566 07:01:53 -- common/autotest_common.sh@10 -- # set +x 00:24:39.566 ************************************ 00:24:39.566 START TEST nvmf_fio_host 00:24:39.566 ************************************ 00:24:39.566 07:01:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:39.566 * Looking for test storage... 
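Every suite in this log is driven through the same run_test wrapper that produced the START TEST / END TEST banners and the real/user/sys timing just above. Loosely reconstructed (the actual common/autotest_common.sh adds argument checks and xtrace control around the timed call):

    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"            # e.g. host/fio.sh --transport=tcp, as invoked above
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
    }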
00:24:39.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.566 07:01:53 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.566 07:01:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.566 07:01:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.566 07:01:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.566 07:01:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.566 07:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.566 07:01:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.566 07:01:53 -- paths/export.sh@5 -- # export PATH 00:24:39.566 07:01:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.566 07:01:53 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.566 07:01:53 -- nvmf/common.sh@7 -- # uname -s 00:24:39.566 07:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.566 07:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.566 07:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.566 07:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.566 07:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.566 07:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.566 07:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.566 07:01:53 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.566 07:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.566 07:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.566 07:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.566 07:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:39.566 07:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.566 07:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.566 07:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.566 07:01:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.566 07:01:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.566 07:01:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.566 07:01:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.566 07:01:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.567 07:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.567 07:01:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.567 07:01:53 -- paths/export.sh@5 -- # export PATH 00:24:39.567 07:01:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.567 07:01:53 -- nvmf/common.sh@46 -- # : 0 00:24:39.567 07:01:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:39.567 07:01:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:39.567 07:01:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:39.567 07:01:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.567 07:01:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.567 07:01:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:39.567 07:01:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:39.567 07:01:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:39.567 07:01:53 -- host/fio.sh@12 -- # nvmftestinit 00:24:39.567 07:01:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:39.567 07:01:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.567 07:01:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:39.567 07:01:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:39.567 07:01:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:39.567 07:01:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.567 07:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.567 07:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.567 07:01:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:39.567 07:01:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:39.567 07:01:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:39.567 07:01:53 -- common/autotest_common.sh@10 -- # set +x 00:24:42.096 07:01:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:42.096 07:01:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:42.096 07:01:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:42.096 07:01:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:42.096 07:01:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:42.096 07:01:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:42.096 07:01:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:42.096 07:01:56 -- nvmf/common.sh@294 -- # net_devs=() 00:24:42.096 07:01:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:42.096 07:01:56 -- nvmf/common.sh@295 -- # e810=() 00:24:42.096 07:01:56 -- nvmf/common.sh@295 -- # local -ga e810 00:24:42.096 07:01:56 -- nvmf/common.sh@296 -- # x722=() 00:24:42.096 07:01:56 -- nvmf/common.sh@296 -- # local -ga x722 00:24:42.096 07:01:56 -- nvmf/common.sh@297 -- # mlx=() 00:24:42.096 07:01:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:42.096 07:01:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.096 07:01:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.097 07:01:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.097 07:01:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:42.097 07:01:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:42.097 07:01:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:42.097 07:01:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:42.097 07:01:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:42.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:42.097 07:01:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:42.097 07:01:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:42.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:42.097 07:01:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:42.097 07:01:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:42.097 07:01:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.097 07:01:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:42.097 07:01:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.097 07:01:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:42.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:42.097 07:01:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.097 07:01:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:42.097 07:01:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.097 07:01:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:42.097 07:01:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.097 07:01:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:42.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:42.097 07:01:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.097 07:01:56 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:42.097 07:01:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:42.097 07:01:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:42.097 07:01:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.097 07:01:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.097 07:01:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.097 07:01:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:42.097 07:01:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.097 07:01:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.097 07:01:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:42.097 07:01:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.097 07:01:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.097 07:01:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:42.097 07:01:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:42.097 07:01:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.097 07:01:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.097 07:01:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.097 07:01:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.097 07:01:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:42.097 07:01:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.097 07:01:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.097 07:01:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.097 07:01:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:42.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:24:42.097 00:24:42.097 --- 10.0.0.2 ping statistics --- 00:24:42.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.097 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:24:42.097 07:01:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:24:42.097 00:24:42.097 --- 10.0.0.1 ping statistics --- 00:24:42.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.097 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:24:42.097 07:01:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.097 07:01:56 -- nvmf/common.sh@410 -- # return 0 00:24:42.097 07:01:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:42.097 07:01:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.097 07:01:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:42.097 07:01:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.097 07:01:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:42.097 07:01:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:42.097 07:01:56 -- host/fio.sh@14 -- # [[ y != y ]] 00:24:42.097 07:01:56 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:42.097 07:01:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:42.097 07:01:56 -- common/autotest_common.sh@10 -- # set +x 00:24:42.097 07:01:56 -- host/fio.sh@22 -- # nvmfpid=596772 00:24:42.097 07:01:56 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:42.097 07:01:56 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:42.097 07:01:56 -- host/fio.sh@26 -- # waitforlisten 596772 00:24:42.097 07:01:56 -- common/autotest_common.sh@819 -- # '[' -z 596772 ']' 00:24:42.097 07:01:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.097 07:01:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:42.097 07:01:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.097 07:01:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:42.097 07:01:56 -- common/autotest_common.sh@10 -- # set +x 00:24:42.097 [2024-05-15 07:01:56.252550] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:24:42.097 [2024-05-15 07:01:56.252630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.097 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.097 [2024-05-15 07:01:56.329874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.355 [2024-05-15 07:01:56.440614] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:42.355 [2024-05-15 07:01:56.440773] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.355 [2024-05-15 07:01:56.440791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.355 [2024-05-15 07:01:56.440803] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
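The block above is the entire point-to-point topology that nvmf_tcp_init builds before the target comes up: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and addressed as the target side (10.0.0.2), the peer port (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420 arriving on the initiator-side interface, and a ping in each direction proves reachability. Condensed into a sketch, with every command and name below taken from the trace itself:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

This is also why NVMF_APP is rebuilt as ("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}"): every target-side process, including the nvmf_tgt launched just below, must run under ip netns exec cvl_0_0_ns_spdk so that it listens on 10.0.0.2 inside the namespace while fio connects from the root namespace.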
00:24:42.355 [2024-05-15 07:01:56.440855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.355 [2024-05-15 07:01:56.440915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.355 [2024-05-15 07:01:56.440980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.355 [2024-05-15 07:01:56.440984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.288 07:01:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:43.288 07:01:57 -- common/autotest_common.sh@852 -- # return 0 00:24:43.288 07:01:57 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:43.288 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:43.288 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:24:43.288 [2024-05-15 07:01:57.191346] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.288 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:43.288 07:01:57 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:43.288 07:01:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:43.288 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:24:43.288 07:01:57 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:43.288 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:43.288 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:24:43.288 Malloc1 00:24:43.288 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:43.288 07:01:57 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.288 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:43.288 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:24:43.288 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:43.288 07:01:57 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:43.288 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:43.288 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:24:43.288 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:43.288 07:01:57 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.288 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:43.288 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:24:43.288 [2024-05-15 07:01:57.263862] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.288 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:43.288 07:01:57 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:43.288 07:01:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:43.288 07:01:57 -- common/autotest_common.sh@10 -- # set +x 00:24:43.288 07:01:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:43.288 07:01:57 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:43.288 07:01:57 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:43.288 07:01:57 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:43.288 07:01:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:43.288 07:01:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:43.288 07:01:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:43.288 07:01:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.288 07:01:57 -- common/autotest_common.sh@1320 -- # shift 00:24:43.288 07:01:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:43.288 07:01:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:43.288 07:01:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:43.288 07:01:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:43.288 07:01:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:43.288 07:01:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:43.288 07:01:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:43.288 07:01:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:43.288 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:43.288 fio-3.35 00:24:43.288 Starting 1 thread 00:24:43.546 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.097 00:24:46.097 test: (groupid=0, jobs=1): err= 0: pid=597124: Wed May 15 07:01:59 2024 00:24:46.097 read: IOPS=9747, BW=38.1MiB/s (39.9MB/s)(76.4MiB/2006msec) 00:24:46.097 slat (nsec): min=1877, max=170791, avg=2493.63, stdev=1850.86 00:24:46.097 clat (usec): min=3473, max=12449, avg=7244.35, stdev=551.33 00:24:46.097 lat (usec): min=3502, max=12452, avg=7246.84, stdev=551.22 00:24:46.097 clat percentiles (usec): 00:24:46.097 | 1.00th=[ 5932], 5.00th=[ 6390], 10.00th=[ 6587], 20.00th=[ 6849], 00:24:46.097 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:24:46.097 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:24:46.097 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10552], 99.95th=[10945], 00:24:46.097 | 99.99th=[12387] 00:24:46.097 bw ( KiB/s): min=37976, max=39552, per=99.94%, avg=38966.00, stdev=734.91, samples=4 00:24:46.097 iops : min= 9494, max= 9888, avg=9741.50, stdev=183.73, samples=4 00:24:46.097 write: IOPS=9756, BW=38.1MiB/s (40.0MB/s)(76.5MiB/2006msec); 0 zone resets 00:24:46.097 slat (usec): min=2, max=123, avg= 2.60, stdev= 1.31 00:24:46.097 clat (usec): 
min=1424, max=10884, avg=5804.82, stdev=491.00 00:24:46.097 lat (usec): min=1433, max=10886, avg=5807.42, stdev=490.94 00:24:46.097 clat percentiles (usec): 00:24:46.097 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5407], 00:24:46.098 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5932], 00:24:46.098 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6521], 00:24:46.098 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[10421], 00:24:46.098 | 99.99th=[10814] 00:24:46.098 bw ( KiB/s): min=38552, max=39488, per=100.00%, avg=39030.00, stdev=396.32, samples=4 00:24:46.098 iops : min= 9638, max= 9872, avg=9757.50, stdev=99.08, samples=4 00:24:46.098 lat (msec) : 2=0.01%, 4=0.09%, 10=99.81%, 20=0.09% 00:24:46.098 cpu : usr=51.47%, sys=38.35%, ctx=66, majf=0, minf=6 00:24:46.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:46.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:46.098 issued rwts: total=19553,19572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:46.098 00:24:46.098 Run status group 0 (all jobs): 00:24:46.098 READ: bw=38.1MiB/s (39.9MB/s), 38.1MiB/s-38.1MiB/s (39.9MB/s-39.9MB/s), io=76.4MiB (80.1MB), run=2006-2006msec 00:24:46.098 WRITE: bw=38.1MiB/s (40.0MB/s), 38.1MiB/s-38.1MiB/s (40.0MB/s-40.0MB/s), io=76.5MiB (80.2MB), run=2006-2006msec 00:24:46.098 07:01:59 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:46.098 07:01:59 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:46.098 07:01:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:46.098 07:01:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:46.098 07:01:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:46.098 07:01:59 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.098 07:01:59 -- common/autotest_common.sh@1320 -- # shift 00:24:46.098 07:01:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:46.098 07:01:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.098 07:01:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.098 07:01:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:46.098 07:01:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:46.098 07:01:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:46.098 07:01:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:46.098 07:01:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.098 07:01:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:46.098 07:01:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.098 07:01:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:46.098 07:01:59 -- 
common/autotest_common.sh@1324 -- # asan_lib= 00:24:46.098 07:01:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:46.098 07:01:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:46.098 07:01:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:46.098 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:46.098 fio-3.35 00:24:46.098 Starting 1 thread 00:24:46.098 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.627 00:24:48.627 test: (groupid=0, jobs=1): err= 0: pid=597472: Wed May 15 07:02:02 2024 00:24:48.628 read: IOPS=7741, BW=121MiB/s (127MB/s)(243MiB/2006msec) 00:24:48.628 slat (nsec): min=2973, max=99467, avg=3754.22, stdev=1816.57 00:24:48.628 clat (usec): min=3875, max=27984, avg=10111.85, stdev=2733.80 00:24:48.628 lat (usec): min=3879, max=27987, avg=10115.61, stdev=2733.97 00:24:48.628 clat percentiles (usec): 00:24:48.628 | 1.00th=[ 5080], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7635], 00:24:48.628 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[10683], 00:24:48.628 | 70.00th=[11469], 80.00th=[12518], 90.00th=[13698], 95.00th=[15008], 00:24:48.628 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19530], 99.95th=[20055], 00:24:48.628 | 99.99th=[21627] 00:24:48.628 bw ( KiB/s): min=57504, max=67904, per=50.81%, avg=62928.00, stdev=4494.04, samples=4 00:24:48.628 iops : min= 3594, max= 4244, avg=3933.00, stdev=280.88, samples=4 00:24:48.628 write: IOPS=4507, BW=70.4MiB/s (73.9MB/s)(129MiB/1834msec); 0 zone resets 00:24:48.628 slat (usec): min=30, max=136, avg=33.72, stdev= 4.97 00:24:48.628 clat (usec): min=5496, max=19250, avg=11218.27, stdev=2004.90 00:24:48.628 lat (usec): min=5528, max=19282, avg=11251.99, stdev=2005.38 00:24:48.628 clat percentiles (usec): 00:24:48.628 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:24:48.628 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:24:48.628 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14091], 95.00th=[15008], 00:24:48.628 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17957], 99.95th=[18744], 00:24:48.628 | 99.99th=[19268] 00:24:48.628 bw ( KiB/s): min=59328, max=70912, per=91.16%, avg=65744.00, stdev=5149.95, samples=4 00:24:48.628 iops : min= 3708, max= 4432, avg=4109.00, stdev=321.87, samples=4 00:24:48.628 lat (msec) : 4=0.01%, 10=44.20%, 20=55.77%, 50=0.03% 00:24:48.628 cpu : usr=76.47%, sys=20.29%, ctx=18, majf=0, minf=2 00:24:48.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:48.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:48.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:48.628 issued rwts: total=15529,8267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:48.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:48.628 00:24:48.628 Run status group 0 (all jobs): 00:24:48.628 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=243MiB (254MB), run=2006-2006msec 00:24:48.628 WRITE: bw=70.4MiB/s (73.9MB/s), 70.4MiB/s-70.4MiB/s (73.9MB/s-73.9MB/s), io=129MiB (135MB), run=1834-1834msec 00:24:48.628 07:02:02 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.628 07:02:02 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.628 07:02:02 -- common/autotest_common.sh@10 -- # set +x 00:24:48.628 07:02:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:48.628 07:02:02 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:24:48.628 07:02:02 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:24:48.628 07:02:02 -- host/fio.sh@49 -- # get_nvme_bdfs 00:24:48.628 07:02:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:48.628 07:02:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:24:48.628 07:02:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:48.628 07:02:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:48.628 07:02:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:48.628 07:02:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:24:48.628 07:02:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:24:48.628 07:02:02 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:24:48.628 07:02:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.628 07:02:02 -- common/autotest_common.sh@10 -- # set +x 00:24:51.902 Nvme0n1 00:24:51.902 07:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.902 07:02:05 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:51.902 07:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.902 07:02:05 -- common/autotest_common.sh@10 -- # set +x 00:24:54.437 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.437 07:02:08 -- host/fio.sh@51 -- # ls_guid=73a8ac84-4dc3-47bf-a532-ab6df0f4b0d4 00:24:54.437 07:02:08 -- host/fio.sh@52 -- # get_lvs_free_mb 73a8ac84-4dc3-47bf-a532-ab6df0f4b0d4 00:24:54.437 07:02:08 -- common/autotest_common.sh@1343 -- # local lvs_uuid=73a8ac84-4dc3-47bf-a532-ab6df0f4b0d4 00:24:54.437 07:02:08 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:54.437 07:02:08 -- common/autotest_common.sh@1345 -- # local fc 00:24:54.438 07:02:08 -- common/autotest_common.sh@1346 -- # local cs 00:24:54.438 07:02:08 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:54.438 07:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.438 07:02:08 -- common/autotest_common.sh@10 -- # set +x 00:24:54.438 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.438 07:02:08 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:54.438 { 00:24:54.438 "uuid": "73a8ac84-4dc3-47bf-a532-ab6df0f4b0d4", 00:24:54.438 "name": "lvs_0", 00:24:54.438 "base_bdev": "Nvme0n1", 00:24:54.438 "total_data_clusters": 930, 00:24:54.438 "free_clusters": 930, 00:24:54.438 "block_size": 512, 00:24:54.438 "cluster_size": 1073741824 00:24:54.438 } 00:24:54.438 ]' 00:24:54.438 07:02:08 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="73a8ac84-4dc3-47bf-a532-ab6df0f4b0d4") .free_clusters' 00:24:54.438 07:02:08 -- common/autotest_common.sh@1348 -- # fc=930 00:24:54.438 07:02:08 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="73a8ac84-4dc3-47bf-a532-ab6df0f4b0d4") .cluster_size' 00:24:54.438 07:02:08 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:24:54.438 07:02:08 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:24:54.438 07:02:08 -- common/autotest_common.sh@1353 -- # echo 952320 00:24:54.438 952320 00:24:54.438 07:02:08 
-- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:24:54.438 07:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.438 07:02:08 -- common/autotest_common.sh@10 -- # set +x 00:24:54.438 b55f915b-eba4-4307-833f-d8aada594670 00:24:54.438 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.438 07:02:08 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:54.438 07:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.438 07:02:08 -- common/autotest_common.sh@10 -- # set +x 00:24:54.438 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.438 07:02:08 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:54.438 07:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.438 07:02:08 -- common/autotest_common.sh@10 -- # set +x 00:24:54.438 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.438 07:02:08 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:54.438 07:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.438 07:02:08 -- common/autotest_common.sh@10 -- # set +x 00:24:54.438 07:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.438 07:02:08 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.438 07:02:08 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.438 07:02:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:54.438 07:02:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:54.438 07:02:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:54.438 07:02:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.438 07:02:08 -- common/autotest_common.sh@1320 -- # shift 00:24:54.438 07:02:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:54.438 07:02:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:54.438 07:02:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:54.438 07:02:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:54.438 07:02:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:54.438 07:02:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:54.438 07:02:08 -- common/autotest_common.sh@1331 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:54.438 07:02:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:54.438 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:54.438 fio-3.35 00:24:54.438 Starting 1 thread 00:24:54.438 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.963 00:24:56.963 test: (groupid=0, jobs=1): err= 0: pid=598539: Wed May 15 07:02:11 2024 00:24:56.963 read: IOPS=6347, BW=24.8MiB/s (26.0MB/s)(49.8MiB/2008msec) 00:24:56.963 slat (nsec): min=1902, max=131553, avg=2583.53, stdev=2361.12 00:24:56.963 clat (usec): min=1227, max=171813, avg=11138.77, stdev=11413.05 00:24:56.963 lat (usec): min=1229, max=171850, avg=11141.36, stdev=11413.28 00:24:56.963 clat percentiles (msec): 00:24:56.963 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:24:56.963 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:24:56.963 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:24:56.963 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:24:56.963 | 99.99th=[ 171] 00:24:56.963 bw ( KiB/s): min=17704, max=28264, per=99.85%, avg=25352.00, stdev=5105.11, samples=4 00:24:56.963 iops : min= 4426, max= 7066, avg=6338.00, stdev=1276.28, samples=4 00:24:56.963 write: IOPS=6344, BW=24.8MiB/s (26.0MB/s)(49.8MiB/2008msec); 0 zone resets 00:24:56.963 slat (nsec): min=2005, max=93353, avg=2649.70, stdev=1773.55 00:24:56.963 clat (usec): min=644, max=170285, avg=8845.59, stdev=10709.02 00:24:56.963 lat (usec): min=646, max=170290, avg=8848.24, stdev=10709.21 00:24:56.963 clat percentiles (msec): 00:24:56.963 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:24:56.963 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:24:56.963 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:24:56.963 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 171], 00:24:56.963 | 99.99th=[ 171] 00:24:56.963 bw ( KiB/s): min=18720, max=27696, per=99.95%, avg=25364.00, stdev=4431.34, samples=4 00:24:56.963 iops : min= 4680, max= 6924, avg=6341.00, stdev=1107.84, samples=4 00:24:56.963 lat (usec) : 750=0.01%, 1000=0.01% 00:24:56.963 lat (msec) : 2=0.04%, 4=0.11%, 10=67.21%, 20=32.12%, 250=0.50% 00:24:56.963 cpu : usr=49.73%, sys=42.00%, ctx=83, majf=0, minf=6 00:24:56.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:56.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:56.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:56.963 issued rwts: total=12746,12739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:56.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:56.963 00:24:56.963 Run status group 0 (all jobs): 00:24:56.963 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.8MiB (52.2MB), run=2008-2008msec 00:24:56.963 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.8MiB (52.2MB), run=2008-2008msec 00:24:56.963 07:02:11 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:56.963 07:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.963 07:02:11 -- common/autotest_common.sh@10 -- # set +x 00:24:56.963 07:02:11 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:24:56.963 07:02:11 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:56.963 07:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.963 07:02:11 -- common/autotest_common.sh@10 -- # set +x 00:24:57.894 07:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.894 07:02:12 -- host/fio.sh@62 -- # ls_nested_guid=81c52631-8513-44ba-805b-7ad548d98b48 00:24:57.894 07:02:12 -- host/fio.sh@63 -- # get_lvs_free_mb 81c52631-8513-44ba-805b-7ad548d98b48 00:24:57.894 07:02:12 -- common/autotest_common.sh@1343 -- # local lvs_uuid=81c52631-8513-44ba-805b-7ad548d98b48 00:24:57.894 07:02:12 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:57.894 07:02:12 -- common/autotest_common.sh@1345 -- # local fc 00:24:57.894 07:02:12 -- common/autotest_common.sh@1346 -- # local cs 00:24:57.894 07:02:12 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:57.895 07:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.895 07:02:12 -- common/autotest_common.sh@10 -- # set +x 00:24:57.895 07:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:57.895 07:02:12 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:57.895 { 00:24:57.895 "uuid": "73a8ac84-4dc3-47bf-a532-ab6df0f4b0d4", 00:24:57.895 "name": "lvs_0", 00:24:57.895 "base_bdev": "Nvme0n1", 00:24:57.895 "total_data_clusters": 930, 00:24:57.895 "free_clusters": 0, 00:24:57.895 "block_size": 512, 00:24:57.895 "cluster_size": 1073741824 00:24:57.895 }, 00:24:57.895 { 00:24:57.895 "uuid": "81c52631-8513-44ba-805b-7ad548d98b48", 00:24:57.895 "name": "lvs_n_0", 00:24:57.895 "base_bdev": "b55f915b-eba4-4307-833f-d8aada594670", 00:24:57.895 "total_data_clusters": 237847, 00:24:57.895 "free_clusters": 237847, 00:24:57.895 "block_size": 512, 00:24:57.895 "cluster_size": 4194304 00:24:57.895 } 00:24:57.895 ]' 00:24:57.895 07:02:12 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="81c52631-8513-44ba-805b-7ad548d98b48") .free_clusters' 00:24:57.895 07:02:12 -- common/autotest_common.sh@1348 -- # fc=237847 00:24:57.895 07:02:12 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="81c52631-8513-44ba-805b-7ad548d98b48") .cluster_size' 00:24:57.895 07:02:12 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:57.895 07:02:12 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:24:57.895 07:02:12 -- common/autotest_common.sh@1353 -- # echo 951388 00:24:57.895 951388 00:24:57.895 07:02:12 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:24:57.895 07:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:57.895 07:02:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.459 498c08e8-fffb-4cdd-a2e8-1ad1bf0797e9 00:24:58.459 07:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.459 07:02:12 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:58.459 07:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.459 07:02:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.459 07:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.459 07:02:12 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:58.459 07:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.459 07:02:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.459 07:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
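The 951388 handed to bdev_lvol_create just above (and the 952320 used for lvs_0 earlier) both fall out of the same free_clusters x cluster_size arithmetic that the logged jq calls feed. A sketch of what get_lvs_free_mb evaluates; the jq filters are copied from the trace, while the final shell arithmetic is inferred from the logged values rather than quoted from the helper:

  fc=$(rpc_cmd bdev_lvol_get_lvstores \
       | jq '.[] | select(.uuid=="'"$lvs_uuid"'") .free_clusters')
  cs=$(rpc_cmd bdev_lvol_get_lvstores \
       | jq '.[] | select(.uuid=="'"$lvs_uuid"'") .cluster_size')
  free_mb=$((fc * cs / 1024 / 1024))   # bytes -> MiB
  # lvs_0:   930    clusters * 1073741824 B = 930    * 1024 MiB = 952320
  # lvs_n_0: 237847 clusters * 4194304 B    = 237847 * 4 MiB    = 951388

Note that the nested store comes out slightly smaller than the 952320 MiB lvol it sits on, which is consistent with lvs_n_0 keeping its own lvstore metadata inside lbd_0 and carving it into 4 MiB clusters instead of the parent's 1 GiB ones.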
00:24:58.459 07:02:12 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:58.460 07:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.460 07:02:12 -- common/autotest_common.sh@10 -- # set +x 00:24:58.460 07:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.460 07:02:12 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:58.460 07:02:12 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:58.460 07:02:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:58.460 07:02:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:58.460 07:02:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:58.460 07:02:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.460 07:02:12 -- common/autotest_common.sh@1320 -- # shift 00:24:58.460 07:02:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:58.460 07:02:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:58.460 07:02:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:58.460 07:02:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:58.460 07:02:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:58.460 07:02:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:58.460 07:02:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:58.460 07:02:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:58.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:58.717 fio-3.35 00:24:58.717 Starting 1 thread 00:24:58.717 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.245 00:25:01.245 test: (groupid=0, jobs=1): err= 0: pid=599124: Wed May 15 07:02:15 2024 00:25:01.245 read: IOPS=6128, BW=23.9MiB/s (25.1MB/s)(48.1MiB/2009msec) 00:25:01.245 slat (nsec): min=1939, max=187724, avg=2559.49, stdev=2429.10 00:25:01.245 clat (usec): min=4461, max=20020, avg=11573.00, stdev=977.63 00:25:01.245 lat (usec): min=4468, max=20022, avg=11575.56, stdev=977.49 00:25:01.245 clat percentiles (usec): 00:25:01.245 | 
1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:25:01.245 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:25:01.245 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:25:01.245 | 99.00th=[13829], 99.50th=[13960], 99.90th=[17171], 99.95th=[18482], 00:25:01.245 | 99.99th=[18744] 00:25:01.245 bw ( KiB/s): min=23184, max=25048, per=99.90%, avg=24490.00, stdev=881.99, samples=4 00:25:01.245 iops : min= 5796, max= 6262, avg=6122.50, stdev=220.50, samples=4 00:25:01.245 write: IOPS=6112, BW=23.9MiB/s (25.0MB/s)(48.0MiB/2009msec); 0 zone resets 00:25:01.245 slat (usec): min=2, max=146, avg= 2.65, stdev= 1.69 00:25:01.245 clat (usec): min=2445, max=16066, avg=9179.27, stdev=864.33 00:25:01.245 lat (usec): min=2453, max=16069, avg=9181.92, stdev=864.28 00:25:01.245 clat percentiles (usec): 00:25:01.245 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8160], 20.00th=[ 8455], 00:25:01.245 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:25:01.245 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10552], 00:25:01.245 | 99.00th=[11076], 99.50th=[11338], 99.90th=[15008], 99.95th=[15926], 00:25:01.245 | 99.99th=[16057] 00:25:01.245 bw ( KiB/s): min=24216, max=24656, per=99.95%, avg=24438.00, stdev=204.63, samples=4 00:25:01.245 iops : min= 6054, max= 6164, avg=6109.50, stdev=51.16, samples=4 00:25:01.245 lat (msec) : 4=0.04%, 10=44.93%, 20=55.03%, 50=0.01% 00:25:01.245 cpu : usr=51.84%, sys=41.14%, ctx=76, majf=0, minf=6 00:25:01.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:25:01.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:01.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:01.245 issued rwts: total=12312,12280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:01.245 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:01.245 00:25:01.245 Run status group 0 (all jobs): 00:25:01.245 READ: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=48.1MiB (50.4MB), run=2009-2009msec 00:25:01.245 WRITE: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=48.0MiB (50.3MB), run=2009-2009msec 00:25:01.245 07:02:15 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:01.245 07:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:01.245 07:02:15 -- common/autotest_common.sh@10 -- # set +x 00:25:01.245 07:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:01.245 07:02:15 -- host/fio.sh@72 -- # sync 00:25:01.245 07:02:15 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:25:01.245 07:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:01.245 07:02:15 -- common/autotest_common.sh@10 -- # set +x 00:25:04.531 07:02:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.531 07:02:18 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:25:04.531 07:02:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.531 07:02:18 -- common/autotest_common.sh@10 -- # set +x 00:25:04.531 07:02:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:04.531 07:02:18 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:25:04.531 07:02:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:04.531 07:02:18 -- common/autotest_common.sh@10 -- # set +x 00:25:07.877 07:02:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.877 07:02:21 -- host/fio.sh@77 -- 
# rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:25:07.877 07:02:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.877 07:02:21 -- common/autotest_common.sh@10 -- # set +x 00:25:07.877 07:02:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.877 07:02:21 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:25:07.877 07:02:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.877 07:02:21 -- common/autotest_common.sh@10 -- # set +x 00:25:09.254 07:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:09.254 07:02:23 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:09.254 07:02:23 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:25:09.254 07:02:23 -- host/fio.sh@84 -- # nvmftestfini 00:25:09.254 07:02:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:09.254 07:02:23 -- nvmf/common.sh@116 -- # sync 00:25:09.254 07:02:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:09.254 07:02:23 -- nvmf/common.sh@119 -- # set +e 00:25:09.254 07:02:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:09.254 07:02:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:09.254 rmmod nvme_tcp 00:25:09.254 rmmod nvme_fabrics 00:25:09.254 rmmod nvme_keyring 00:25:09.254 07:02:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:09.254 07:02:23 -- nvmf/common.sh@123 -- # set -e 00:25:09.254 07:02:23 -- nvmf/common.sh@124 -- # return 0 00:25:09.254 07:02:23 -- nvmf/common.sh@477 -- # '[' -n 596772 ']' 00:25:09.254 07:02:23 -- nvmf/common.sh@478 -- # killprocess 596772 00:25:09.254 07:02:23 -- common/autotest_common.sh@926 -- # '[' -z 596772 ']' 00:25:09.254 07:02:23 -- common/autotest_common.sh@930 -- # kill -0 596772 00:25:09.254 07:02:23 -- common/autotest_common.sh@931 -- # uname 00:25:09.254 07:02:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:09.254 07:02:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 596772 00:25:09.254 07:02:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:09.254 07:02:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:09.254 07:02:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 596772' 00:25:09.254 killing process with pid 596772 00:25:09.254 07:02:23 -- common/autotest_common.sh@945 -- # kill 596772 00:25:09.254 07:02:23 -- common/autotest_common.sh@950 -- # wait 596772 00:25:09.254 07:02:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:09.254 07:02:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:09.254 07:02:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:09.254 07:02:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.254 07:02:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:09.254 07:02:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.254 07:02:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.254 07:02:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.794 07:02:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:11.794 00:25:11.794 real 0m31.989s 00:25:11.794 user 1m53.245s 00:25:11.794 sys 0m6.678s 00:25:11.794 07:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.794 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:25:11.794 ************************************ 00:25:11.794 END TEST nvmf_fio_host 00:25:11.794 ************************************ 00:25:11.794 07:02:25 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.794 07:02:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:11.794 07:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:11.794 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:25:11.794 ************************************ 00:25:11.794 START TEST nvmf_failover 00:25:11.794 ************************************ 00:25:11.794 07:02:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.794 * Looking for test storage... 00:25:11.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.794 07:02:25 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.794 07:02:25 -- nvmf/common.sh@7 -- # uname -s 00:25:11.794 07:02:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.794 07:02:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.794 07:02:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.794 07:02:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.794 07:02:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.794 07:02:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.794 07:02:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.794 07:02:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.794 07:02:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.794 07:02:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.794 07:02:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.794 07:02:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:11.794 07:02:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.794 07:02:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.794 07:02:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.794 07:02:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.794 07:02:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.794 07:02:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.794 07:02:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.794 07:02:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.794 07:02:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.794 07:02:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.794 07:02:25 -- paths/export.sh@5 -- # export PATH 00:25:11.795 07:02:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.795 07:02:25 -- nvmf/common.sh@46 -- # : 0 00:25:11.795 07:02:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:11.795 07:02:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:11.795 07:02:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:11.795 07:02:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.795 07:02:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.795 07:02:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:11.795 07:02:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:11.795 07:02:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:11.795 07:02:25 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.795 07:02:25 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.795 07:02:25 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:11.795 07:02:25 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.795 07:02:25 -- host/failover.sh@18 -- # nvmftestinit 00:25:11.795 07:02:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:11.795 07:02:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.795 07:02:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:11.795 07:02:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:11.795 07:02:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:11.795 07:02:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.795 07:02:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:11.795 07:02:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.795 07:02:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:11.795 07:02:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
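nvmftestinit now repeats the same device discovery the fio host test did: gather_supported_nvmf_pci_devs matches PCI vendor/device IDs against the supported-NIC tables (e810: 0x1592/0x159b, x722: 0x37d2, plus the Mellanox list), then resolves each matching PCI address to its kernel net device through sysfs. The trace only shows the array plumbing, so here is a minimal stand-alone sketch of the same idea, assuming direct sysfs reads in place of the script's pre-built pci_bus_cache:

  e810=() net_devs=()
  for dev in /sys/bus/pci/devices/*; do
      [[ $(<"$dev/vendor") == 0x8086 ]] || continue
      case $(<"$dev/device") in
          0x1592|0x159b) e810+=("${dev##*/}") ;;      # E810 device IDs from the trace
      esac
  done
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
      net_devs+=("${pci_net_devs[@]##*/}")              # keep just the interface name
  done

With both E810 ports found (cvl_0_0 and cvl_0_1), the (( 2 > 1 )) check below can again split them into a target interface and an initiator interface, exactly as in the fio run.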
00:25:11.795 07:02:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:11.795 07:02:25 -- common/autotest_common.sh@10 -- # set +x 00:25:14.329 07:02:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:14.329 07:02:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:14.329 07:02:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:14.329 07:02:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:14.329 07:02:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:14.329 07:02:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:14.329 07:02:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:14.329 07:02:28 -- nvmf/common.sh@294 -- # net_devs=() 00:25:14.329 07:02:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:14.329 07:02:28 -- nvmf/common.sh@295 -- # e810=() 00:25:14.329 07:02:28 -- nvmf/common.sh@295 -- # local -ga e810 00:25:14.329 07:02:28 -- nvmf/common.sh@296 -- # x722=() 00:25:14.329 07:02:28 -- nvmf/common.sh@296 -- # local -ga x722 00:25:14.329 07:02:28 -- nvmf/common.sh@297 -- # mlx=() 00:25:14.329 07:02:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:14.329 07:02:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.329 07:02:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:14.329 07:02:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:14.329 07:02:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:14.329 07:02:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.329 07:02:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:14.329 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:14.329 07:02:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.329 07:02:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:14.329 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:14.329 07:02:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:14.329 07:02:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:14.329 07:02:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.329 07:02:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.329 07:02:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.329 07:02:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.329 07:02:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:14.329 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:14.329 07:02:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.329 07:02:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.329 07:02:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.329 07:02:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.329 07:02:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.329 07:02:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:14.329 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:14.329 07:02:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.329 07:02:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:14.329 07:02:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:14.329 07:02:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:14.329 07:02:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.329 07:02:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.329 07:02:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.329 07:02:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:14.329 07:02:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.329 07:02:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.329 07:02:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:14.329 07:02:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.329 07:02:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.329 07:02:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:14.329 07:02:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:14.329 07:02:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.329 07:02:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.329 07:02:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.329 07:02:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.329 07:02:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:14.329 07:02:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.329 07:02:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.329 07:02:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.329 07:02:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:14.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:25:14.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:25:14.329 00:25:14.329 --- 10.0.0.2 ping statistics --- 00:25:14.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.329 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:25:14.329 07:02:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:25:14.329 00:25:14.329 --- 10.0.0.1 ping statistics --- 00:25:14.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.329 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:25:14.329 07:02:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.329 07:02:28 -- nvmf/common.sh@410 -- # return 0 00:25:14.329 07:02:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:14.329 07:02:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.329 07:02:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:14.329 07:02:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.329 07:02:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:14.329 07:02:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:14.329 07:02:28 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:14.329 07:02:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:14.329 07:02:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:14.329 07:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.329 07:02:28 -- nvmf/common.sh@469 -- # nvmfpid=602678 00:25:14.329 07:02:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:14.329 07:02:28 -- nvmf/common.sh@470 -- # waitforlisten 602678 00:25:14.329 07:02:28 -- common/autotest_common.sh@819 -- # '[' -z 602678 ']' 00:25:14.329 07:02:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.329 07:02:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:14.329 07:02:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.329 07:02:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:14.329 07:02:28 -- common/autotest_common.sh@10 -- # set +x 00:25:14.329 [2024-05-15 07:02:28.225039] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:14.329 [2024-05-15 07:02:28.225129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.329 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.329 [2024-05-15 07:02:28.300799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:14.329 [2024-05-15 07:02:28.406094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.329 [2024-05-15 07:02:28.406245] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.329 [2024-05-15 07:02:28.406262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:14.329 [2024-05-15 07:02:28.406275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.329 [2024-05-15 07:02:28.406371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.329 [2024-05-15 07:02:28.406403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.329 [2024-05-15 07:02:28.406405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.265 07:02:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:15.265 07:02:29 -- common/autotest_common.sh@852 -- # return 0 00:25:15.265 07:02:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:15.265 07:02:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:15.265 07:02:29 -- common/autotest_common.sh@10 -- # set +x 00:25:15.265 07:02:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.265 07:02:29 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:15.265 [2024-05-15 07:02:29.429776] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.265 07:02:29 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:15.522 Malloc0 00:25:15.522 07:02:29 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.780 07:02:29 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.038 07:02:30 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.295 [2024-05-15 07:02:30.499070] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.295 07:02:30 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.554 [2024-05-15 07:02:30.735730] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.554 07:02:30 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.811 [2024-05-15 07:02:30.964485] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.811 07:02:30 -- host/failover.sh@31 -- # bdevperf_pid=603005 00:25:16.811 07:02:30 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:16.811 07:02:30 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.811 07:02:30 -- host/failover.sh@34 -- # waitforlisten 603005 /var/tmp/bdevperf.sock 00:25:16.811 07:02:30 -- common/autotest_common.sh@819 -- # '[' -z 603005 ']' 00:25:16.811 07:02:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.811 07:02:30 -- common/autotest_common.sh@824 -- # local max_retries=100 
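Stepping back from the raw trace: everything the target needed for this failover run was provisioned through scripts/rpc.py in the lines above. Condensed, the sequence is the one below (rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used in the log; this is a sketch of the calls as traced, not a verbatim excerpt of failover.sh):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # three portals on the same subsystem give the host paths to fail over between
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422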
00:25:16.811 07:02:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.811 07:02:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:16.811 07:02:30 -- common/autotest_common.sh@10 -- # set +x 00:25:17.749 07:02:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:17.749 07:02:31 -- common/autotest_common.sh@852 -- # return 0 00:25:17.749 07:02:31 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.315 NVMe0n1 00:25:18.315 07:02:32 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.884 00:25:18.884 07:02:32 -- host/failover.sh@39 -- # run_test_pid=603279 00:25:18.884 07:02:32 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:18.884 07:02:32 -- host/failover.sh@41 -- # sleep 1 00:25:19.818 07:02:33 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:20.079 [2024-05-15 07:02:34.057178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 [2024-05-15 07:02:34.057425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331d0 is same with the state(5) to be set 00:25:20.079 
[log trimmed: the same tcp.c:1574 *ERROR* line for tqpair=0x14331d0 repeats with timestamps 07:02:34.057437 through 07:02:34.057938]
00:25:20.079 07:02:34 -- host/failover.sh@45 -- # sleep 3
00:25:23.370 07:02:37 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:23.370 00
00:25:23.636 07:02:37 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
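At this point the test has already failed the host over once and is doing it again: bdevperf attached NVMe0 through ports 4420 and 4421, the 4420 listener was removed, a third path on 4422 was attached, and now the 4421 listener has been removed as well. Condensed to the host-side calls seen in the trace (a sketch of the flow; rpc.py abbreviates the same scripts/rpc.py path as above):

  # attach the same subsystem through two portals; bdev_nvme treats the second as a failover path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the active portal on the target; I/O should continue on the surviving path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bursts of tcp.c:1574 *ERROR* lines after each removal appear to be the target tearing down the dropped connection's qpairs rather than a test failure; the run completes normally further down.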
00:25:23.636 [2024-05-15 07:02:37.668115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1434720 is same with the state(5) to be set
[log trimmed: the same tcp.c:1574 *ERROR* line for tqpair=0x1434720 repeats with timestamps 07:02:37.668187 through 07:02:37.669145]
00:25:23.637 07:02:37 -- host/failover.sh@50 -- # sleep 3
00:25:26.956 07:02:40 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:26.956 [2024-05-15 07:02:40.919563] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:26.956 07:02:40 -- host/failover.sh@55 -- # sleep 1
00:25:27.893 07:02:41 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:28.152 [2024-05-15 07:02:42.217271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1434e00 is same with the state(5) to be set
[log trimmed: the same tcp.c:1574 *ERROR* line for tqpair=0x1434e00 repeats with timestamps 07:02:42.217343 through 07:02:42.218043]
00:25:28.152 07:02:42 -- host/failover.sh@59 -- # wait 603279
00:25:34.730 0
00:25:34.730 07:02:47 -- host/failover.sh@61 -- # killprocess 603005
00:25:34.730 07:02:47 -- common/autotest_common.sh@926 -- # '[' -z 603005 ']'
00:25:34.730 07:02:47 -- common/autotest_common.sh@930 -- # kill -0 603005
00:25:34.730 07:02:47 -- common/autotest_common.sh@931 -- # uname
00:25:34.730 07:02:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:34.730 07:02:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 603005
00:25:34.730 07:02:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:25:34.730 07:02:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:25:34.730 07:02:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 603005'
killing process with pid 603005
00:25:34.730 07:02:48 -- common/autotest_common.sh@945 -- # kill 603005
00:25:34.730 07:02:48 -- common/autotest_common.sh@950 -- # wait 603005
00:25:34.730 07:02:48 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:34.730 [2024-05-15 07:02:31.023206] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:25:34.730 [2024-05-15 07:02:31.023297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603005 ]
00:25:34.730 EAL: No free 2048 kB hugepages reported on node 1
00:25:34.730 [2024-05-15 07:02:31.093331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:34.730 [2024-05-15 07:02:31.199566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:34.730 Running I/O for 15 seconds...
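What follows is try.txt, bdevperf's own log for the run that just finished. Its usage pattern, condensed from the trace earlier (repo-relative paths; a sketch, not the verbatim test script):

  # start bdevperf idle (-z) on its own RPC socket: queue depth 128, 4 KiB I/O, verify workload, 15 s
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # attach the NVMe0 paths via rpc.py -s /var/tmp/bdevperf.sock (as above), then kick off the workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ABORTED - SQ DELETION completions in the dump correspond to in-flight commands aborted as each connection's submission queue was deleted during the forced failovers; the test itself still returned 0 (the wait result above).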
00:25:34.730 [2024-05-15 07:02:34.058307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.730 [2024-05-15 07:02:34.058348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log trimmed: the same nvme_io_qpair_print_command READ/WRITE + ABORTED - SQ DELETION completion pair repeats for each remaining in-flight I/O, lba values between 117528 and 118496, timestamps 07:02:34.058379 through 07:02:34.060194]
00:25:34.732 [2024-05-15 07:02:34.060210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:118504 len:8 SGL DATA BLOCK
OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:118520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:118528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:118552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:118584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 
[2024-05-15 07:02:34.060528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.060873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:118624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:118632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:118640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.060982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.060997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:118648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.061014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.061104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:118680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.061133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:118696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.732 [2024-05-15 07:02:34.061192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.732 [2024-05-15 07:02:34.061343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.732 [2024-05-15 07:02:34.061357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061447] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:118816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:118824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.061964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.061980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.733 [2024-05-15 07:02:34.061994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.733 [2024-05-15 07:02:34.062234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa0600 is same with the state(5) to be set 00:25:34.733 [2024-05-15 07:02:34.062265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.733 [2024-05-15 07:02:34.062296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.733 [2024-05-15 07:02:34.062308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118344 len:8 PRP1 0x0 PRP2 0x0 00:25:34.733 [2024-05-15 07:02:34.062321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.733 [2024-05-15 07:02:34.062378] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1aa0600 was disconnected and freed. reset controller. 
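Every aborted record above prints its status as a "(SCT/SC)" pair; "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification defines as Command Aborted due to SQ Deletion. A minimal, self-contained sketch of that decoding (hypothetical helper names, not SPDK code):

```c
/* Sketch only: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
 * formats, e.g. "(00/08)" in the records above. The helper name is
 * hypothetical; the numeric codes are from the NVMe spec's generic
 * command status set (0x00 success, 0x07 abort requested, 0x08 aborted
 * due to SQ deletion). */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x00) return "SUCCESS";
    if (sct == 0x0 && sc == 0x07) return "ABORTED - BY REQUEST";
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;   /* the pair printed throughout the log */
    printf("(%02x/%02x) => %s\n", sct, sc, decode_status(sct, sc));
    return 0;
}
```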
00:25:34.733 [2024-05-15 07:02:34.062404] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:34.733 [2024-05-15 07:02:34.062452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.733 [2024-05-15 07:02:34.062471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.733 [2024-05-15 07:02:34.062487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.733 [2024-05-15 07:02:34.062500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.733 [2024-05-15 07:02:34.062513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.733 [2024-05-15 07:02:34.062527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.733 [2024-05-15 07:02:34.062540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.733 [2024-05-15 07:02:34.062553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.733 [2024-05-15 07:02:34.062566] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:34.733 [2024-05-15 07:02:34.062619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a81bd0 (9): Bad file descriptor
00:25:34.733 [2024-05-15 07:02:34.064908] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:34.733 [2024-05-15 07:02:34.103721] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
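The completions above also print "dnr:0": the NVMe status field carries a Do Not Retry bit, and with DNR clear the initiator is permitted to resubmit the aborted command, which is what the failover from 10.0.0.2:4420 to 10.0.0.2:4421 relies on. A hedged sketch of that retry decision (all names hypothetical, not the bdev_nvme implementation):

```c
/* Sketch only: the retry decision a multipath layer might make when a
 * queued command completes with ABORTED - SQ DELETION during failover.
 * Types and names are illustrative, not SPDK APIs. */
#include <stdbool.h>
#include <stdio.h>

struct nvme_status {
    unsigned sct;   /* status code type, 0x0 = generic command status */
    unsigned sc;    /* status code, 0x08 = aborted due to SQ deletion */
    bool dnr;       /* Do Not Retry bit from the completion            */
};

static bool may_retry(const struct nvme_status *s)
{
    if (s->dnr) {
        return false;   /* controller forbids resubmission */
    }
    /* An SQ-deletion abort is a path-level abort reported here with DNR
     * clear, so it is treated as retriable on the surviving path. */
    return s->sct == 0x0 && s->sc == 0x08;
}

int main(void)
{
    struct nvme_status s = { 0x0, 0x08, false };  /* "(00/08) ... dnr:0" */
    printf("retry after failover: %s\n", may_retry(&s) ? "yes" : "no");
    return 0;
}
```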
00:25:34.733 [2024-05-15 07:02:37.669364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.733 [2024-05-15 07:02:37.669405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated over elapsed 00:25:34.733-00:25:34.736: the same abort pattern, nvme_io_qpair_print_command *NOTICE* READ/WRITE sqid:1 nsid:1 len:8 records (lba 119136-120248), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:34.736 [2024-05-15 07:02:37.672397] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.672961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.672979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.672995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.673026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.673058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.673090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.673129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.673161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.673196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.736 [2024-05-15 07:02:37.673233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.736 [2024-05-15 07:02:37.673291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.736 [2024-05-15 07:02:37.673306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.737 [2024-05-15 07:02:37.673320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.737 [2024-05-15 07:02:37.673351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:34.737 [2024-05-15 07:02:37.673429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.737 [2024-05-15 07:02:37.673614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a8e050 is same with the state(5) to be set 00:25:34.737 [2024-05-15 07:02:37.673646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.737 [2024-05-15 07:02:37.673664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.737 [2024-05-15 07:02:37.673676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120016 len:8 PRP1 0x0 PRP2 0x0 00:25:34.737 [2024-05-15 07:02:37.673689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.737 [2024-05-15 07:02:37.673749] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a8e050 was disconnected and freed. reset controller. 
00:25:34.737 [2024-05-15 07:02:37.673768] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:34.737 [2024-05-15 07:02:37.673824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.737 [2024-05-15 07:02:37.673843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.737 [2024-05-15 07:02:37.673859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.737 [2024-05-15 07:02:37.673872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.737 [2024-05-15 07:02:37.673886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.737 [2024-05-15 07:02:37.673899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.737 [2024-05-15 07:02:37.673913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.737 [2024-05-15 07:02:37.673943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.737 [2024-05-15 07:02:37.673957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:34.737 [2024-05-15 07:02:37.676139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:34.737 [2024-05-15 07:02:37.676178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a81bd0 (9): Bad file descriptor
00:25:34.737 [2024-05-15 07:02:37.756701] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
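The failover itself is visible in the records just above: bdev_nvme switches the controller's transport ID from the listener at 10.0.0.2:4421 to the alternate at 10.0.0.2:4422, the four outstanding admin ASYNC EVENT REQUESTs are aborted along with the admin queue, the controller is marked failed, and a reset reconnects it on the new path ("Resetting controller successful"). The intermediate "Failed to flush tqpair ... (9): Bad file descriptor" error is a side effect of flushing a socket that has already been closed. In terms of the public driver API, the recovery amounts to a controller reset followed by re-allocating the freed I/O qpair; a rough sketch under that assumption (error handling elided, not the bdev_nvme internals):

    #include "spdk/nvme.h"

    /* Sketch of the recovery step logged above: reset the controller on its
     * (now updated) transport ID, then re-create the I/O qpair that was
     * disconnected and freed. */
    static struct spdk_nvme_qpair *
    recover_path(struct spdk_nvme_ctrlr *ctrlr)
    {
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                    return NULL; /* reset failed; controller stays failed */
            }
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    }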
00:25:34.737 [2024-05-15 07:02:42.218278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.737 [2024-05-15 07:02:42.218324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: a second abort burst at 07:02:42, in which every READ/WRITE outstanding on qid:1 (lba values between 101536 and 102816) again completed with ABORTED - SQ DELETION (00/08) ...]
00:25:34.740 [2024-05-15 07:02:42.221968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.740 [2024-05-15 07:02:42.221983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.221998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.740 [2024-05-15 07:02:42.222012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.740 [2024-05-15 07:02:42.222042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.740 [2024-05-15 07:02:42.222071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.740 [2024-05-15 07:02:42.222100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.740 [2024-05-15 07:02:42.222130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.740 [2024-05-15 07:02:42.222159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aae540 is same with the state(5) to be set 00:25:34.740 [2024-05-15 07:02:42.222190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.740 [2024-05-15 07:02:42.222202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.740 [2024-05-15 07:02:42.222214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102272 len:8 PRP1 0x0 PRP2 0x0 00:25:34.740 [2024-05-15 07:02:42.222228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222291] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1aae540 was disconnected and freed. reset controller. 
00:25:34.740 [2024-05-15 07:02:42.222319] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:34.740 [2024-05-15 07:02:42.222366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.740 [2024-05-15 07:02:42.222385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.740 [2024-05-15 07:02:42.222414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.740 [2024-05-15 07:02:42.222441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.740 [2024-05-15 07:02:42.222468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.740 [2024-05-15 07:02:42.222482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.740 [2024-05-15 07:02:42.222536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a81bd0 (9): Bad file descriptor 00:25:34.740 [2024-05-15 07:02:42.224578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.740 [2024-05-15 07:02:42.338184] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
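What the long abort sequence above records is the anatomy of one path switch: when the active path goes away, bdev_nvme deletes the TCP submission queue, manually completes every queued READ/WRITE back to bdevperf with ABORTED - SQ DELETION (00/08), frees the qpair, and then fails over to the next registered trid (here 10.0.0.2:4420) and resets the controller, which is why each switch ends in a 'Resetting controller successful' notice that the script counts below. In the second phase the switches are forced explicitly by detaching the active path over RPC, roughly (a sketch of the failover.sh@84 call traced below, with rpc.py standing for the full scripts/rpc.py path):

    # Drop the 4420 path from the host-side controller NVMe0; bdev_nvme must
    # abort its queued I/O and fail over to one of the remaining paths.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1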
00:25:34.740 00:25:34.740 Latency(us) 00:25:34.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.740 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:34.740 Verification LBA range: start 0x0 length 0x4000 00:25:34.740 NVMe0n1 : 15.01 12978.95 50.70 915.16 0.00 9196.00 1116.54 15146.10 00:25:34.740 =================================================================================================================== 00:25:34.740 Total : 12978.95 50.70 915.16 0.00 9196.00 1116.54 15146.10 00:25:34.740 Received shutdown signal, test time was about 15.000000 seconds 00:25:34.740 00:25:34.740 Latency(us) 00:25:34.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.741 =================================================================================================================== 00:25:34.741 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.741 07:02:48 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:34.741 07:02:48 -- host/failover.sh@65 -- # count=3 00:25:34.741 07:02:48 -- host/failover.sh@67 -- # (( count != 3 )) 00:25:34.741 07:02:48 -- host/failover.sh@73 -- # bdevperf_pid=605054 00:25:34.741 07:02:48 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:34.741 07:02:48 -- host/failover.sh@75 -- # waitforlisten 605054 /var/tmp/bdevperf.sock 00:25:34.741 07:02:48 -- common/autotest_common.sh@819 -- # '[' -z 605054 ']' 00:25:34.741 07:02:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.741 07:02:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:34.741 07:02:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
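Once this second bdevperf instance is listening, the trace that follows builds it a three-path controller before knocking paths out one at a time: listeners are opened on ports 4421 and 4422 next to the existing 4420, the same subsystem is attached through all three portals, the 4420 path is detached, and only then is the actual I/O run kicked off out of band via the perform_tests RPC (failover.sh@89). Condensed from the calls below (full paths shortened to rpc.py/bdevperf.py):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    for port in 4420 4421 4422; do    # one bdev_nvme path per portal
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # bdevperf was started idle with -z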
00:25:34.741 07:02:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:34.741 07:02:48 -- common/autotest_common.sh@10 -- # set +x 00:25:34.999 07:02:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:34.999 07:02:49 -- common/autotest_common.sh@852 -- # return 0 00:25:34.999 07:02:49 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:35.257 [2024-05-15 07:02:49.490045] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:35.515 07:02:49 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:35.772 [2024-05-15 07:02:49.750739] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:35.772 07:02:49 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.030 NVMe0n1 00:25:36.030 07:02:50 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.593 00:25:36.593 07:02:50 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.157 00:25:37.157 07:02:51 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.158 07:02:51 -- host/failover.sh@82 -- # grep -q NVMe0 00:25:37.158 07:02:51 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:37.415 07:02:51 -- host/failover.sh@87 -- # sleep 3 00:25:40.690 07:02:54 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.690 07:02:54 -- host/failover.sh@88 -- # grep -q NVMe0 00:25:40.690 07:02:54 -- host/failover.sh@90 -- # run_test_pid=605880 00:25:40.690 07:02:54 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:40.690 07:02:54 -- host/failover.sh@92 -- # wait 605880 00:25:42.062 0 00:25:42.062 07:02:55 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:42.062 [2024-05-15 07:02:48.283631] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:42.062 [2024-05-15 07:02:48.283718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid605054 ] 00:25:42.062 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.062 [2024-05-15 07:02:48.353791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.062 [2024-05-15 07:02:48.462481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.062 [2024-05-15 07:02:51.574020] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:42.062 [2024-05-15 07:02:51.574093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.062 [2024-05-15 07:02:51.574114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.062 [2024-05-15 07:02:51.574131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.062 [2024-05-15 07:02:51.574145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.062 [2024-05-15 07:02:51.574159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.062 [2024-05-15 07:02:51.574172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.062 [2024-05-15 07:02:51.574186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.062 [2024-05-15 07:02:51.574199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.062 [2024-05-15 07:02:51.574213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:42.062 [2024-05-15 07:02:51.574248] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:42.062 [2024-05-15 07:02:51.574278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1418bd0 (9): Bad file descriptor 00:25:42.062 [2024-05-15 07:02:51.582839] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:42.062 Running I/O for 1 seconds... 
00:25:42.062 00:25:42.062 Latency(us) 00:25:42.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.062 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:42.062 Verification LBA range: start 0x0 length 0x4000 00:25:42.062 NVMe0n1 : 1.01 13085.83 51.12 0.00 0.00 9745.11 1086.20 11845.03 00:25:42.062 =================================================================================================================== 00:25:42.062 Total : 13085.83 51.12 0.00 0.00 9745.11 1086.20 11845.03 00:25:42.062 07:02:55 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:42.062 07:02:55 -- host/failover.sh@95 -- # grep -q NVMe0 00:25:42.062 07:02:56 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.627 07:02:56 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:42.627 07:02:56 -- host/failover.sh@99 -- # grep -q NVMe0 00:25:42.627 07:02:56 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:42.884 07:02:57 -- host/failover.sh@101 -- # sleep 3 00:25:46.192 07:03:00 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:46.192 07:03:00 -- host/failover.sh@103 -- # grep -q NVMe0 00:25:46.192 07:03:00 -- host/failover.sh@108 -- # killprocess 605054 00:25:46.192 07:03:00 -- common/autotest_common.sh@926 -- # '[' -z 605054 ']' 00:25:46.192 07:03:00 -- common/autotest_common.sh@930 -- # kill -0 605054 00:25:46.192 07:03:00 -- common/autotest_common.sh@931 -- # uname 00:25:46.193 07:03:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.193 07:03:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 605054 00:25:46.193 07:03:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:46.193 07:03:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:46.193 07:03:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 605054' 00:25:46.193 killing process with pid 605054 00:25:46.193 07:03:00 -- common/autotest_common.sh@945 -- # kill 605054 00:25:46.193 07:03:00 -- common/autotest_common.sh@950 -- # wait 605054 00:25:46.450 07:03:00 -- host/failover.sh@110 -- # sync 00:25:46.450 07:03:00 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:46.708 07:03:00 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:46.708 07:03:00 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:46.708 07:03:00 -- host/failover.sh@116 -- # nvmftestfini 00:25:46.708 07:03:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:46.708 07:03:00 -- nvmf/common.sh@116 -- # sync 00:25:46.708 07:03:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:46.708 07:03:00 -- nvmf/common.sh@119 -- # set +e 00:25:46.708 07:03:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:46.708 07:03:00 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:25:46.708 rmmod nvme_tcp 00:25:46.708 rmmod nvme_fabrics 00:25:46.708 rmmod nvme_keyring 00:25:46.708 07:03:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:46.708 07:03:00 -- nvmf/common.sh@123 -- # set -e 00:25:46.708 07:03:00 -- nvmf/common.sh@124 -- # return 0 00:25:46.708 07:03:00 -- nvmf/common.sh@477 -- # '[' -n 602678 ']' 00:25:46.708 07:03:00 -- nvmf/common.sh@478 -- # killprocess 602678 00:25:46.708 07:03:00 -- common/autotest_common.sh@926 -- # '[' -z 602678 ']' 00:25:46.708 07:03:00 -- common/autotest_common.sh@930 -- # kill -0 602678 00:25:46.708 07:03:00 -- common/autotest_common.sh@931 -- # uname 00:25:46.708 07:03:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:46.708 07:03:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 602678 00:25:46.708 07:03:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:46.708 07:03:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:46.708 07:03:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 602678' 00:25:46.708 killing process with pid 602678 00:25:46.708 07:03:00 -- common/autotest_common.sh@945 -- # kill 602678 00:25:46.708 07:03:00 -- common/autotest_common.sh@950 -- # wait 602678 00:25:47.273 07:03:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:47.273 07:03:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:47.273 07:03:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:47.273 07:03:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:47.273 07:03:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:47.273 07:03:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.273 07:03:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:47.273 07:03:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.174 07:03:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:49.174 00:25:49.174 real 0m37.762s 00:25:49.174 user 2m11.649s 00:25:49.174 sys 0m6.646s 00:25:49.174 07:03:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.174 07:03:03 -- common/autotest_common.sh@10 -- # set +x 00:25:49.174 ************************************ 00:25:49.174 END TEST nvmf_failover 00:25:49.174 ************************************ 00:25:49.174 07:03:03 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:49.174 07:03:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:49.174 07:03:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:49.174 07:03:03 -- common/autotest_common.sh@10 -- # set +x 00:25:49.174 ************************************ 00:25:49.174 START TEST nvmf_discovery 00:25:49.174 ************************************ 00:25:49.174 07:03:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:49.174 * Looking for test storage... 
00:25:49.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:49.174 07:03:03 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.174 07:03:03 -- nvmf/common.sh@7 -- # uname -s 00:25:49.174 07:03:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.174 07:03:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.174 07:03:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.174 07:03:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.174 07:03:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.174 07:03:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.174 07:03:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.174 07:03:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.174 07:03:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.174 07:03:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.174 07:03:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.174 07:03:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:49.174 07:03:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.174 07:03:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.174 07:03:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.174 07:03:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.174 07:03:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.174 07:03:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.174 07:03:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:49.174 07:03:03 -- paths/export.sh@2 -- # PATH=[... long toolchain PATH (/opt/golangci, /opt/protoc, /opt/go prefixes repeated, plus the standard system PATH) elided ...] 00:25:49.174 07:03:03 -- paths/export.sh@3 -- # PATH=[... same PATH with /opt/go/1.21.1/bin prepended, elided ...] 00:25:49.174 07:03:03 -- paths/export.sh@4 -- # PATH=[... same PATH with /opt/protoc/21.7/bin prepended, elided ...] 00:25:49.174 07:03:03 -- paths/export.sh@5 -- # export PATH 00:25:49.174 07:03:03 -- paths/export.sh@6 -- # echo [... resulting PATH, elided ...]
00:25:49.174 07:03:03 -- nvmf/common.sh@46 -- # : 0 00:25:49.174 07:03:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:49.174 07:03:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:49.174 07:03:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:49.174 07:03:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.174 07:03:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.174 07:03:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:49.174 07:03:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:49.174 07:03:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:49.174 07:03:03 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:49.174 07:03:03 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:49.174 07:03:03 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:49.174 07:03:03 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:49.174 07:03:03 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:49.174 07:03:03 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:49.174 07:03:03 -- host/discovery.sh@25 -- # nvmftestinit 00:25:49.174 07:03:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:49.174 07:03:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.174 07:03:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:49.174 07:03:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:49.174 07:03:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:49.174 07:03:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.174 07:03:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:49.174 07:03:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.174 07:03:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:49.174 07:03:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:49.174 07:03:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:49.174 07:03:03 -- common/autotest_common.sh@10 -- # set +x 00:25:51.703 07:03:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:51.703 07:03:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:51.703 07:03:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:51.703 07:03:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:51.703 07:03:05 --
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:51.703 07:03:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:51.703 07:03:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:51.703 07:03:05 -- nvmf/common.sh@294 -- # net_devs=() 00:25:51.703 07:03:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:51.703 07:03:05 -- nvmf/common.sh@295 -- # e810=() 00:25:51.703 07:03:05 -- nvmf/common.sh@295 -- # local -ga e810 00:25:51.703 07:03:05 -- nvmf/common.sh@296 -- # x722=() 00:25:51.703 07:03:05 -- nvmf/common.sh@296 -- # local -ga x722 00:25:51.703 07:03:05 -- nvmf/common.sh@297 -- # mlx=() 00:25:51.703 07:03:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:51.703 07:03:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.703 07:03:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:51.703 07:03:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:51.703 07:03:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:51.703 07:03:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:51.703 07:03:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:51.703 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:51.703 07:03:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:51.703 07:03:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:51.703 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:51.703 07:03:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:51.703 07:03:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:51.703 
07:03:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.703 07:03:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:51.703 07:03:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.703 07:03:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:51.703 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:51.703 07:03:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.703 07:03:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:51.703 07:03:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.703 07:03:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:51.703 07:03:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.703 07:03:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:51.703 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:51.703 07:03:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.703 07:03:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:51.703 07:03:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:51.703 07:03:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:51.703 07:03:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.703 07:03:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.703 07:03:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.703 07:03:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:51.703 07:03:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.703 07:03:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.703 07:03:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:51.703 07:03:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.703 07:03:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.703 07:03:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:51.703 07:03:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:51.703 07:03:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.703 07:03:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.703 07:03:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.703 07:03:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.703 07:03:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:51.703 07:03:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.703 07:03:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.703 07:03:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.703 07:03:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:51.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:25:51.703 00:25:51.703 --- 10.0.0.2 ping statistics --- 00:25:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.703 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:25:51.703 07:03:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:51.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:25:51.703 00:25:51.703 --- 10.0.0.1 ping statistics --- 00:25:51.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.703 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:51.703 07:03:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.703 07:03:05 -- nvmf/common.sh@410 -- # return 0 00:25:51.703 07:03:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:51.703 07:03:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:51.703 07:03:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:51.703 07:03:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:51.703 07:03:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:51.703 07:03:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:51.703 07:03:05 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:51.703 07:03:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:51.703 07:03:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:51.703 07:03:05 -- common/autotest_common.sh@10 -- # set +x 00:25:51.703 07:03:05 -- nvmf/common.sh@469 -- # nvmfpid=609037 00:25:51.703 07:03:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:51.703 07:03:05 -- nvmf/common.sh@470 -- # waitforlisten 609037 00:25:51.703 07:03:05 -- common/autotest_common.sh@819 -- # '[' -z 609037 ']' 00:25:51.703 07:03:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.703 07:03:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:51.704 07:03:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.704 07:03:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:51.704 07:03:05 -- common/autotest_common.sh@10 -- # set +x 00:25:51.704 [2024-05-15 07:03:05.864018] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:25:51.704 [2024-05-15 07:03:05.864103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.704 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.961 [2024-05-15 07:03:05.938227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.961 [2024-05-15 07:03:06.044709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:51.961 [2024-05-15 07:03:06.044864] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.961 [2024-05-15 07:03:06.044887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.961 [2024-05-15 07:03:06.044898] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
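While the target comes up, note what the namespace plumbing above actually built: this is a two-port loopback rig, with one port of the E810 (cvl_0_0, 10.0.0.2) moved into the cvl_0_0_ns_spdk namespace to act as the target and its sibling port (cvl_0_1, 10.0.0.1) left in the root namespace as the initiator; the two pings confirm the path in both directions across the physical link. Stripped of the xtrace noise, the setup reduces to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # the nvmfpid=609037 app above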
00:25:51.961 [2024-05-15 07:03:06.044924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.894 07:03:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.894 07:03:06 -- common/autotest_common.sh@852 -- # return 0 00:25:52.894 07:03:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:52.894 07:03:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:52.894 07:03:06 -- common/autotest_common.sh@10 -- # set +x 00:25:52.894 07:03:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.894 07:03:06 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.894 07:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.894 07:03:06 -- common/autotest_common.sh@10 -- # set +x 00:25:52.894 [2024-05-15 07:03:06.836923] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.894 07:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.894 07:03:06 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:52.894 07:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.894 07:03:06 -- common/autotest_common.sh@10 -- # set +x 00:25:52.894 [2024-05-15 07:03:06.845110] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:52.894 07:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.894 07:03:06 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:52.894 07:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.894 07:03:06 -- common/autotest_common.sh@10 -- # set +x 00:25:52.894 null0 00:25:52.894 07:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.894 07:03:06 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:52.894 07:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.894 07:03:06 -- common/autotest_common.sh@10 -- # set +x 00:25:52.894 null1 00:25:52.894 07:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.894 07:03:06 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:52.894 07:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.894 07:03:06 -- common/autotest_common.sh@10 -- # set +x 00:25:52.894 07:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.894 07:03:06 -- host/discovery.sh@45 -- # hostpid=609190 00:25:52.895 07:03:06 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:52.895 07:03:06 -- host/discovery.sh@46 -- # waitforlisten 609190 /tmp/host.sock 00:25:52.895 07:03:06 -- common/autotest_common.sh@819 -- # '[' -z 609190 ']' 00:25:52.895 07:03:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:52.895 07:03:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:52.895 07:03:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:52.895 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:52.895 07:03:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:52.895 07:03:06 -- common/autotest_common.sh@10 -- # set +x 00:25:52.895 [2024-05-15 07:03:06.912589] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:25:52.895 [2024-05-15 07:03:06.912668] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid609190 ] 00:25:52.895 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.895 [2024-05-15 07:03:06.984600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.895 [2024-05-15 07:03:07.099297] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:52.895 [2024-05-15 07:03:07.099482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.830 07:03:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:53.830 07:03:07 -- common/autotest_common.sh@852 -- # return 0 00:25:53.830 07:03:07 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.830 07:03:07 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:53.830 07:03:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:07 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.830 07:03:07 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:53.830 07:03:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:07 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.830 07:03:07 -- host/discovery.sh@72 -- # notify_id=0 00:25:53.830 07:03:07 -- host/discovery.sh@78 -- # get_subsystem_names 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.830 07:03:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:07 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # sort 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # xargs 00:25:53.830 07:03:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.830 07:03:07 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:25:53.830 07:03:07 -- host/discovery.sh@79 -- # get_bdev_list 00:25:53.830 07:03:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.830 07:03:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.830 07:03:07 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:07 -- host/discovery.sh@55 -- # sort 00:25:53.830 07:03:07 -- host/discovery.sh@55 -- # xargs 00:25:53.830 07:03:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.830 07:03:07 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:25:53.830 07:03:07 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.830 07:03:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:07 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.830 07:03:07 -- host/discovery.sh@82 -- # get_subsystem_names 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:25:53.830 07:03:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:07 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # sort 00:25:53.830 07:03:07 -- host/discovery.sh@59 -- # xargs 00:25:53.830 07:03:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.830 07:03:08 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:25:53.830 07:03:08 -- host/discovery.sh@83 -- # get_bdev_list 00:25:53.830 07:03:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.830 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.830 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:08 -- host/discovery.sh@55 -- # sort 00:25:53.830 07:03:08 -- host/discovery.sh@55 -- # xargs 00:25:53.830 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.830 07:03:08 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:53.830 07:03:08 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:53.830 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.830 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:53.830 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.087 07:03:08 -- host/discovery.sh@86 -- # get_subsystem_names 00:25:54.087 07:03:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.087 07:03:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.087 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.087 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:54.087 07:03:08 -- host/discovery.sh@59 -- # sort 00:25:54.088 07:03:08 -- host/discovery.sh@59 -- # xargs 00:25:54.088 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:25:54.088 07:03:08 -- host/discovery.sh@87 -- # get_bdev_list 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # sort 00:25:54.088 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.088 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # xargs 00:25:54.088 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:54.088 07:03:08 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:54.088 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.088 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 [2024-05-15 07:03:08.148714] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.088 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@92 -- # get_subsystem_names 00:25:54.088 07:03:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:54.088 07:03:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:54.088 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.088 07:03:08 -- host/discovery.sh@59 -- # sort 00:25:54.088 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 07:03:08 
-- host/discovery.sh@59 -- # xargs 00:25:54.088 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:54.088 07:03:08 -- host/discovery.sh@93 -- # get_bdev_list 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:54.088 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.088 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # sort 00:25:54.088 07:03:08 -- host/discovery.sh@55 -- # xargs 00:25:54.088 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:25:54.088 07:03:08 -- host/discovery.sh@94 -- # get_notification_count 00:25:54.088 07:03:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:54.088 07:03:08 -- host/discovery.sh@74 -- # jq '. | length' 00:25:54.088 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.088 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@74 -- # notification_count=0 00:25:54.088 07:03:08 -- host/discovery.sh@75 -- # notify_id=0 00:25:54.088 07:03:08 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:54.088 07:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.088 07:03:08 -- common/autotest_common.sh@10 -- # set +x 00:25:54.088 07:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.088 07:03:08 -- host/discovery.sh@100 -- # sleep 1 00:25:55.020 [2024-05-15 07:03:08.907611] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:55.020 [2024-05-15 07:03:08.907641] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:55.020 [2024-05-15 07:03:08.907666] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.020 [2024-05-15 07:03:08.994982] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:55.020 [2024-05-15 07:03:09.097686] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.020 [2024-05-15 07:03:09.097714] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:55.278 07:03:09 -- host/discovery.sh@101 -- # get_subsystem_names 00:25:55.278 07:03:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.278 07:03:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.278 07:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.278 07:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 07:03:09 -- host/discovery.sh@59 -- # sort 00:25:55.278 07:03:09 -- host/discovery.sh@59 -- # xargs 00:25:55.278 07:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.278 07:03:09 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.278 07:03:09 -- host/discovery.sh@102 -- # get_bdev_list 00:25:55.278 07:03:09 -- host/discovery.sh@55 -- # 
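At this point the target side of the discovery test is complete: subsystem nqn.2016-06.io.spdk:cnode0 exists with null0 as its namespace, a data listener is open on 10.0.0.2:4420, and host nqn.2021-12.io.spdk:test has just been allowed in, yet every get_subsystem_names/get_bdev_list probe so far came back empty because the host-side discovery service (started with bdev_nvme_start_discovery against port 8009 at discovery.sh@51 above) has not re-read the discovery log page yet; the sleep 1 gives it time to. For reference, the probes being polled reduce to (a sketch of the discovery.sh helpers as traced above):

    # Controllers the host-side bdev_nvme currently holds (empty until attach):
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    # Bdevs exposed by those controllers (nvme0n1 once null0 is found):
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

After the sleep, the discovery poller attaches controller nvme0 and bdev nvme0n1 appears, which is exactly what the checks that follow assert.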
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.278 07:03:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.278 07:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.278 07:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 07:03:09 -- host/discovery.sh@55 -- # sort 00:25:55.278 07:03:09 -- host/discovery.sh@55 -- # xargs 00:25:55.278 07:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.278 07:03:09 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:55.278 07:03:09 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:25:55.278 07:03:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:55.278 07:03:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:55.278 07:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.278 07:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:55.278 07:03:09 -- host/discovery.sh@63 -- # sort -n 00:25:55.278 07:03:09 -- host/discovery.sh@63 -- # xargs 00:25:55.279 07:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.279 07:03:09 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:25:55.279 07:03:09 -- host/discovery.sh@104 -- # get_notification_count 00:25:55.279 07:03:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.279 07:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.279 07:03:09 -- host/discovery.sh@74 -- # jq '. | length' 00:25:55.279 07:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:55.279 07:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.279 07:03:09 -- host/discovery.sh@74 -- # notification_count=1 00:25:55.279 07:03:09 -- host/discovery.sh@75 -- # notify_id=1 00:25:55.279 07:03:09 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:25:55.279 07:03:09 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:55.279 07:03:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:55.279 07:03:09 -- common/autotest_common.sh@10 -- # set +x 00:25:55.279 07:03:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:55.279 07:03:09 -- host/discovery.sh@109 -- # sleep 1 00:25:56.210 07:03:10 -- host/discovery.sh@110 -- # get_bdev_list 00:25:56.468 07:03:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.468 07:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.468 07:03:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.468 07:03:10 -- common/autotest_common.sh@10 -- # set +x 00:25:56.468 07:03:10 -- host/discovery.sh@55 -- # sort 00:25:56.468 07:03:10 -- host/discovery.sh@55 -- # xargs 00:25:56.468 07:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.468 07:03:10 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:56.468 07:03:10 -- host/discovery.sh@111 -- # get_notification_count 00:25:56.468 07:03:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:56.468 07:03:10 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:56.468 07:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.468 07:03:10 -- common/autotest_common.sh@10 -- # set +x 00:25:56.468 07:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.468 07:03:10 -- host/discovery.sh@74 -- # notification_count=1 00:25:56.468 07:03:10 -- host/discovery.sh@75 -- # notify_id=2 00:25:56.468 07:03:10 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:25:56.468 07:03:10 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:56.468 07:03:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:56.468 07:03:10 -- common/autotest_common.sh@10 -- # set +x 00:25:56.468 [2024-05-15 07:03:10.531809] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:56.468 [2024-05-15 07:03:10.532484] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:56.468 [2024-05-15 07:03:10.532544] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.468 07:03:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:56.468 07:03:10 -- host/discovery.sh@117 -- # sleep 1 00:25:56.468 [2024-05-15 07:03:10.659880] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:56.726 [2024-05-15 07:03:10.926406] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.726 [2024-05-15 07:03:10.926437] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:56.726 [2024-05-15 07:03:10.926448] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:57.659 07:03:11 -- host/discovery.sh@118 -- # get_subsystem_names 00:25:57.659 07:03:11 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.659 07:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.659 07:03:11 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.659 07:03:11 -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 07:03:11 -- host/discovery.sh@59 -- # sort 00:25:57.659 07:03:11 -- host/discovery.sh@59 -- # xargs 00:25:57.659 07:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.659 07:03:11 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.659 07:03:11 -- host/discovery.sh@119 -- # get_bdev_list 00:25:57.659 07:03:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.659 07:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.659 07:03:11 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.659 07:03:11 -- common/autotest_common.sh@10 -- # set +x 00:25:57.659 07:03:11 -- host/discovery.sh@55 -- # sort 00:25:57.659 07:03:11 -- host/discovery.sh@55 -- # xargs 00:25:57.659 07:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.660 07:03:11 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:57.660 07:03:11 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:25:57.660 07:03:11 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:57.660 07:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.660 07:03:11 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:25:57.660 07:03:11 -- common/autotest_common.sh@10 -- # set +x 00:25:57.660 07:03:11 -- host/discovery.sh@63 -- # sort -n 00:25:57.660 07:03:11 -- host/discovery.sh@63 -- # xargs 00:25:57.660 07:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.660 07:03:11 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:57.660 07:03:11 -- host/discovery.sh@121 -- # get_notification_count 00:25:57.660 07:03:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:57.660 07:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.660 07:03:11 -- host/discovery.sh@74 -- # jq '. | length' 00:25:57.660 07:03:11 -- common/autotest_common.sh@10 -- # set +x 00:25:57.660 07:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.660 07:03:11 -- host/discovery.sh@74 -- # notification_count=0 00:25:57.660 07:03:11 -- host/discovery.sh@75 -- # notify_id=2 00:25:57.660 07:03:11 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:25:57.660 07:03:11 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.660 07:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.660 07:03:11 -- common/autotest_common.sh@10 -- # set +x 00:25:57.660 [2024-05-15 07:03:11.711797] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:57.660 [2024-05-15 07:03:11.711844] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.660 07:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.660 07:03:11 -- host/discovery.sh@127 -- # sleep 1 00:25:57.660 [2024-05-15 07:03:11.718350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.660 [2024-05-15 07:03:11.718384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.660 [2024-05-15 07:03:11.718403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.660 [2024-05-15 07:03:11.718419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.660 [2024-05-15 07:03:11.718435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.660 [2024-05-15 07:03:11.718457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.660 [2024-05-15 07:03:11.718474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.660 [2024-05-15 07:03:11.718489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.660 [2024-05-15 07:03:11.718504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0b70 is same with the state(5) to be set 00:25:57.660 [2024-05-15 07:03:11.728353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b0b70 (9): Bad file descriptor 00:25:57.660 [2024-05-15 07:03:11.738406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.660 [2024-05-15 07:03:11.738685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.738920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.738957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b0b70 with addr=10.0.0.2, port=4420 00:25:57.660 [2024-05-15 07:03:11.738991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0b70 is same with the state(5) to be set 00:25:57.660 [2024-05-15 07:03:11.739015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b0b70 (9): Bad file descriptor 00:25:57.660 [2024-05-15 07:03:11.739056] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:57.660 [2024-05-15 07:03:11.739074] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:57.660 [2024-05-15 07:03:11.739089] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:57.660 [2024-05-15 07:03:11.739111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.660 [2024-05-15 07:03:11.748487] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.660 [2024-05-15 07:03:11.748811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.749074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.749102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b0b70 with addr=10.0.0.2, port=4420 00:25:57.660 [2024-05-15 07:03:11.749118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0b70 is same with the state(5) to be set 00:25:57.660 [2024-05-15 07:03:11.749141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b0b70 (9): Bad file descriptor 00:25:57.660 [2024-05-15 07:03:11.749181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:57.660 [2024-05-15 07:03:11.749199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:57.660 [2024-05-15 07:03:11.749213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:57.660 [2024-05-15 07:03:11.749232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
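For reference, the connect() failures above carry errno 111 because the 4420 listener was just removed, so the target actively refuses new connections on that port. A quick sketch to decode that errno on any Linux host (assumes python3 is available; this command is not part of the test itself):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused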
00:25:57.660 [2024-05-15 07:03:11.758564] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.660 [2024-05-15 07:03:11.758820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.759066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.759093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b0b70 with addr=10.0.0.2, port=4420 00:25:57.660 [2024-05-15 07:03:11.759110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0b70 is same with the state(5) to be set 00:25:57.660 [2024-05-15 07:03:11.759138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b0b70 (9): Bad file descriptor 00:25:57.660 [2024-05-15 07:03:11.759159] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:57.660 [2024-05-15 07:03:11.759187] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:57.660 [2024-05-15 07:03:11.759200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:57.660 [2024-05-15 07:03:11.759219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.660 [2024-05-15 07:03:11.768645] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.660 [2024-05-15 07:03:11.768904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.769118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.769144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b0b70 with addr=10.0.0.2, port=4420 00:25:57.660 [2024-05-15 07:03:11.769160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0b70 is same with the state(5) to be set 00:25:57.660 [2024-05-15 07:03:11.769182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b0b70 (9): Bad file descriptor 00:25:57.660 [2024-05-15 07:03:11.769203] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:57.660 [2024-05-15 07:03:11.769217] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:57.660 [2024-05-15 07:03:11.769251] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:57.660 [2024-05-15 07:03:11.769272] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
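The reset attempts above repeat on a short cadence until the path comes back or a controller-loss timeout fires. How long the host keeps retrying is tunable; a minimal sketch of starting discovery with bounded retry behavior, using the same flags the discovery_remove_ifc test passes later in this run (the socket path and host NQN are the ones this run uses; the rpc.py location is assumed to be the standard scripts/ directory of this workspace's spdk checkout):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
    --wait-for-attach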
00:25:57.660 [2024-05-15 07:03:11.778722] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.660 [2024-05-15 07:03:11.778975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.779178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.779219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b0b70 with addr=10.0.0.2, port=4420 00:25:57.660 [2024-05-15 07:03:11.779237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0b70 is same with the state(5) to be set 00:25:57.660 [2024-05-15 07:03:11.779261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b0b70 (9): Bad file descriptor 00:25:57.660 [2024-05-15 07:03:11.779283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:57.660 [2024-05-15 07:03:11.779299] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:57.660 [2024-05-15 07:03:11.779314] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:57.660 [2024-05-15 07:03:11.779335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:57.660 [2024-05-15 07:03:11.788798] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:57.660 [2024-05-15 07:03:11.789080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.789248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-05-15 07:03:11.789273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b0b70 with addr=10.0.0.2, port=4420 00:25:57.660 [2024-05-15 07:03:11.789289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b0b70 is same with the state(5) to be set 00:25:57.660 [2024-05-15 07:03:11.789311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b0b70 (9): Bad file descriptor 00:25:57.660 [2024-05-15 07:03:11.789336] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:57.660 [2024-05-15 07:03:11.789351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:57.660 [2024-05-15 07:03:11.789364] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:57.660 [2024-05-15 07:03:11.789383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
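Once the discovery poller re-reads the discovery log page it drops the dead 4420 path and keeps 4421, which the test verifies below through get_subsystem_paths. A sketch of the equivalent manual check, mirroring the RPC and jq filter the test itself uses (rpc.py path assumed as above):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
  # expect only 4421 to remain after the 4420 listener is removed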
00:25:57.661 [2024-05-15 07:03:11.798444] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:57.661 [2024-05-15 07:03:11.798476] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.593 07:03:12 -- host/discovery.sh@128 -- # get_subsystem_names 00:25:58.594 07:03:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.594 07:03:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.594 07:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.594 07:03:12 -- common/autotest_common.sh@10 -- # set +x 00:25:58.594 07:03:12 -- host/discovery.sh@59 -- # sort 00:25:58.594 07:03:12 -- host/discovery.sh@59 -- # xargs 00:25:58.594 07:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.594 07:03:12 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.594 07:03:12 -- host/discovery.sh@129 -- # get_bdev_list 00:25:58.594 07:03:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.594 07:03:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.594 07:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.594 07:03:12 -- common/autotest_common.sh@10 -- # set +x 00:25:58.594 07:03:12 -- host/discovery.sh@55 -- # sort 00:25:58.594 07:03:12 -- host/discovery.sh@55 -- # xargs 00:25:58.594 07:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.594 07:03:12 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.594 07:03:12 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:25:58.594 07:03:12 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.594 07:03:12 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.594 07:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.594 07:03:12 -- common/autotest_common.sh@10 -- # set +x 00:25:58.594 07:03:12 -- host/discovery.sh@63 -- # sort -n 00:25:58.594 07:03:12 -- host/discovery.sh@63 -- # xargs 00:25:58.594 07:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.852 07:03:12 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:25:58.852 07:03:12 -- host/discovery.sh@131 -- # get_notification_count 00:25:58.852 07:03:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:58.852 07:03:12 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:58.852 07:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.852 07:03:12 -- common/autotest_common.sh@10 -- # set +x 00:25:58.852 07:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.852 07:03:12 -- host/discovery.sh@74 -- # notification_count=0 00:25:58.852 07:03:12 -- host/discovery.sh@75 -- # notify_id=2 00:25:58.852 07:03:12 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:25:58.852 07:03:12 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:58.852 07:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:58.852 07:03:12 -- common/autotest_common.sh@10 -- # set +x 00:25:58.852 07:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:58.852 07:03:12 -- host/discovery.sh@135 -- # sleep 1 00:25:59.784 07:03:13 -- host/discovery.sh@136 -- # get_subsystem_names 00:25:59.784 07:03:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.784 07:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.784 07:03:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.784 07:03:13 -- common/autotest_common.sh@10 -- # set +x 00:25:59.784 07:03:13 -- host/discovery.sh@59 -- # sort 00:25:59.784 07:03:13 -- host/discovery.sh@59 -- # xargs 00:25:59.784 07:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.784 07:03:13 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:25:59.784 07:03:13 -- host/discovery.sh@137 -- # get_bdev_list 00:25:59.784 07:03:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.784 07:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.784 07:03:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.784 07:03:13 -- common/autotest_common.sh@10 -- # set +x 00:25:59.784 07:03:13 -- host/discovery.sh@55 -- # sort 00:25:59.784 07:03:13 -- host/discovery.sh@55 -- # xargs 00:25:59.784 07:03:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:59.784 07:03:13 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:25:59.784 07:03:14 -- host/discovery.sh@138 -- # get_notification_count 00:25:59.784 07:03:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.784 07:03:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:59.784 07:03:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:59.784 07:03:14 -- common/autotest_common.sh@10 -- # set +x 00:25:59.784 07:03:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.042 07:03:14 -- host/discovery.sh@74 -- # notification_count=2 00:26:00.042 07:03:14 -- host/discovery.sh@75 -- # notify_id=4 00:26:00.042 07:03:14 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:26:00.042 07:03:14 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:00.042 07:03:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.042 07:03:14 -- common/autotest_common.sh@10 -- # set +x 00:26:01.009 [2024-05-15 07:03:15.094232] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:01.009 [2024-05-15 07:03:15.094257] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:01.009 [2024-05-15 07:03:15.094295] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.009 [2024-05-15 07:03:15.180572] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:01.267 [2024-05-15 07:03:15.489726] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.267 [2024-05-15 07:03:15.489771] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.267 07:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.267 07:03:15 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.267 07:03:15 -- common/autotest_common.sh@640 -- # local es=0 00:26:01.267 07:03:15 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.267 07:03:15 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:01.267 07:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:01.267 07:03:15 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:01.267 07:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:01.267 07:03:15 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.267 07:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.267 07:03:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.524 request: 00:26:01.524 { 00:26:01.524 "name": "nvme", 00:26:01.524 "trtype": "tcp", 00:26:01.524 "traddr": "10.0.0.2", 00:26:01.524 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.524 "adrfam": "ipv4", 00:26:01.524 "trsvcid": "8009", 00:26:01.524 "wait_for_attach": true, 00:26:01.524 "method": "bdev_nvme_start_discovery", 00:26:01.524 "req_id": 1 00:26:01.524 } 00:26:01.524 Got JSON-RPC error response 00:26:01.524 response: 00:26:01.524 { 00:26:01.524 "code": -17, 00:26:01.524 "message": "File exists" 00:26:01.524 } 00:26:01.524 07:03:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:01.524 07:03:15 -- common/autotest_common.sh@643 -- # es=1 00:26:01.524 07:03:15 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:01.524 07:03:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:01.524 07:03:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:01.524 07:03:15 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:26:01.524 07:03:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.524 07:03:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.524 07:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.525 07:03:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.525 07:03:15 -- host/discovery.sh@67 -- # sort 00:26:01.525 07:03:15 -- host/discovery.sh@67 -- # xargs 00:26:01.525 07:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.525 07:03:15 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:26:01.525 07:03:15 -- host/discovery.sh@147 -- # get_bdev_list 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.525 07:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.525 07:03:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # sort 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # xargs 00:26:01.525 07:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.525 07:03:15 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.525 07:03:15 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.525 07:03:15 -- common/autotest_common.sh@640 -- # local es=0 00:26:01.525 07:03:15 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.525 07:03:15 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:01.525 07:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:01.525 07:03:15 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:01.525 07:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:01.525 07:03:15 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.525 07:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.525 07:03:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.525 request: 00:26:01.525 { 00:26:01.525 "name": "nvme_second", 00:26:01.525 "trtype": "tcp", 00:26:01.525 "traddr": "10.0.0.2", 00:26:01.525 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.525 "adrfam": "ipv4", 00:26:01.525 "trsvcid": "8009", 00:26:01.525 "wait_for_attach": true, 00:26:01.525 "method": "bdev_nvme_start_discovery", 00:26:01.525 "req_id": 1 00:26:01.525 } 00:26:01.525 Got JSON-RPC error response 00:26:01.525 response: 00:26:01.525 { 00:26:01.525 "code": -17, 00:26:01.525 "message": "File exists" 00:26:01.525 } 00:26:01.525 07:03:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:01.525 07:03:15 -- common/autotest_common.sh@643 -- # es=1 00:26:01.525 07:03:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:01.525 07:03:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:01.525 07:03:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:01.525 
07:03:15 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:26:01.525 07:03:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.525 07:03:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.525 07:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.525 07:03:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.525 07:03:15 -- host/discovery.sh@67 -- # sort 00:26:01.525 07:03:15 -- host/discovery.sh@67 -- # xargs 00:26:01.525 07:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.525 07:03:15 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:26:01.525 07:03:15 -- host/discovery.sh@153 -- # get_bdev_list 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.525 07:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.525 07:03:15 -- common/autotest_common.sh@10 -- # set +x 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # sort 00:26:01.525 07:03:15 -- host/discovery.sh@55 -- # xargs 00:26:01.525 07:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.525 07:03:15 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.525 07:03:15 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.525 07:03:15 -- common/autotest_common.sh@640 -- # local es=0 00:26:01.525 07:03:15 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.525 07:03:15 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:01.525 07:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:01.525 07:03:15 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:01.525 07:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:01.525 07:03:15 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.525 07:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.525 07:03:15 -- common/autotest_common.sh@10 -- # set +x 00:26:02.898 [2024-05-15 07:03:16.697849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-05-15 07:03:16.698137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.898 [2024-05-15 07:03:16.698166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2221df0 with addr=10.0.0.2, port=8010 00:26:02.898 [2024-05-15 07:03:16.698197] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:02.898 [2024-05-15 07:03:16.698211] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:02.898 [2024-05-15 07:03:16.698224] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:03.831 [2024-05-15 07:03:17.700266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.831 [2024-05-15 07:03:17.700520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.831 [2024-05-15 07:03:17.700547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x2221df0 with addr=10.0.0.2, port=8010 00:26:03.831 [2024-05-15 07:03:17.700577] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:03.831 [2024-05-15 07:03:17.700592] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:03.831 [2024-05-15 07:03:17.700606] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:04.762 [2024-05-15 07:03:18.702364] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:04.762 request: 00:26:04.762 { 00:26:04.762 "name": "nvme_second", 00:26:04.762 "trtype": "tcp", 00:26:04.762 "traddr": "10.0.0.2", 00:26:04.762 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:04.762 "adrfam": "ipv4", 00:26:04.762 "trsvcid": "8010", 00:26:04.762 "attach_timeout_ms": 3000, 00:26:04.762 "method": "bdev_nvme_start_discovery", 00:26:04.762 "req_id": 1 00:26:04.762 } 00:26:04.762 Got JSON-RPC error response 00:26:04.762 response: 00:26:04.762 { 00:26:04.762 "code": -110, 00:26:04.763 "message": "Connection timed out" 00:26:04.763 } 00:26:04.763 07:03:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:04.763 07:03:18 -- common/autotest_common.sh@643 -- # es=1 00:26:04.763 07:03:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:04.763 07:03:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:04.763 07:03:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:04.763 07:03:18 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:26:04.763 07:03:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:04.763 07:03:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.763 07:03:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:04.763 07:03:18 -- common/autotest_common.sh@10 -- # set +x 00:26:04.763 07:03:18 -- host/discovery.sh@67 -- # sort 00:26:04.763 07:03:18 -- host/discovery.sh@67 -- # xargs 00:26:04.763 07:03:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.763 07:03:18 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:26:04.763 07:03:18 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:26:04.763 07:03:18 -- host/discovery.sh@162 -- # kill 609190 00:26:04.763 07:03:18 -- host/discovery.sh@163 -- # nvmftestfini 00:26:04.763 07:03:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:04.763 07:03:18 -- nvmf/common.sh@116 -- # sync 00:26:04.763 07:03:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:04.763 07:03:18 -- nvmf/common.sh@119 -- # set +e 00:26:04.763 07:03:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:04.763 07:03:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:04.763 rmmod nvme_tcp 00:26:04.763 rmmod nvme_fabrics 00:26:04.763 rmmod nvme_keyring 00:26:04.763 07:03:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:04.763 07:03:18 -- nvmf/common.sh@123 -- # set -e 00:26:04.763 07:03:18 -- nvmf/common.sh@124 -- # return 0 00:26:04.763 07:03:18 -- nvmf/common.sh@477 -- # '[' -n 609037 ']' 00:26:04.763 07:03:18 -- nvmf/common.sh@478 -- # killprocess 609037 00:26:04.763 07:03:18 -- common/autotest_common.sh@926 -- # '[' -z 609037 ']' 00:26:04.763 07:03:18 -- common/autotest_common.sh@930 -- # kill -0 609037 00:26:04.763 07:03:18 -- common/autotest_common.sh@931 -- # uname 00:26:04.763 07:03:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:04.763 07:03:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 609037 00:26:04.763 
07:03:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:04.763 07:03:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:04.763 07:03:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 609037' 00:26:04.763 killing process with pid 609037 00:26:04.763 07:03:18 -- common/autotest_common.sh@945 -- # kill 609037 00:26:04.763 07:03:18 -- common/autotest_common.sh@950 -- # wait 609037 00:26:05.021 07:03:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:05.021 07:03:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:05.021 07:03:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:05.021 07:03:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.021 07:03:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:05.021 07:03:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.021 07:03:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.021 07:03:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.925 07:03:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:06.925 00:26:06.925 real 0m17.845s 00:26:06.925 user 0m27.336s 00:26:06.925 sys 0m3.168s 00:26:06.925 07:03:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.925 07:03:21 -- common/autotest_common.sh@10 -- # set +x 00:26:06.925 ************************************ 00:26:06.925 END TEST nvmf_discovery 00:26:06.925 ************************************ 00:26:06.925 07:03:21 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:06.925 07:03:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:06.925 07:03:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.925 07:03:21 -- common/autotest_common.sh@10 -- # set +x 00:26:07.184 ************************************ 00:26:07.184 START TEST nvmf_discovery_remove_ifc 00:26:07.184 ************************************ 00:26:07.184 07:03:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:07.184 * Looking for test storage... 
00:26:07.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.184 07:03:21 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.184 07:03:21 -- nvmf/common.sh@7 -- # uname -s 00:26:07.184 07:03:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.184 07:03:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.184 07:03:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.184 07:03:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.184 07:03:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.184 07:03:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.184 07:03:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.184 07:03:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.184 07:03:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.184 07:03:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.184 07:03:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:07.184 07:03:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:07.184 07:03:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.184 07:03:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.184 07:03:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.184 07:03:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.184 07:03:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.184 07:03:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.184 07:03:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.184 07:03:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.184 07:03:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.185 07:03:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.185 07:03:21 -- paths/export.sh@5 -- # export PATH 00:26:07.185 07:03:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.185 07:03:21 -- nvmf/common.sh@46 -- # : 0 00:26:07.185 07:03:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:07.185 07:03:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:07.185 07:03:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:07.185 07:03:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.185 07:03:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.185 07:03:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:07.185 07:03:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:07.185 07:03:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:07.185 07:03:21 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:07.185 07:03:21 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:07.185 07:03:21 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:07.185 07:03:21 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:07.185 07:03:21 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:07.185 07:03:21 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:07.185 07:03:21 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:07.185 07:03:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:07.185 07:03:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.185 07:03:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:07.185 07:03:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:07.185 07:03:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:07.185 07:03:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.185 07:03:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.185 07:03:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.185 07:03:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:07.185 07:03:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:07.185 07:03:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:07.185 07:03:21 -- common/autotest_common.sh@10 -- # set +x 00:26:09.718 07:03:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:09.718 07:03:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:09.718 07:03:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:09.718 07:03:23 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:09.718 07:03:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:09.718 07:03:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:09.718 07:03:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:09.718 07:03:23 -- nvmf/common.sh@294 -- # net_devs=() 00:26:09.718 07:03:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:09.718 07:03:23 -- nvmf/common.sh@295 -- # e810=() 00:26:09.718 07:03:23 -- nvmf/common.sh@295 -- # local -ga e810 00:26:09.718 07:03:23 -- nvmf/common.sh@296 -- # x722=() 00:26:09.718 07:03:23 -- nvmf/common.sh@296 -- # local -ga x722 00:26:09.718 07:03:23 -- nvmf/common.sh@297 -- # mlx=() 00:26:09.718 07:03:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:09.718 07:03:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.718 07:03:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:09.718 07:03:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:09.718 07:03:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:09.718 07:03:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:09.718 07:03:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:09.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:09.718 07:03:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:09.718 07:03:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:09.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:09.718 07:03:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:09.718 07:03:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:09.718 07:03:23 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:09.718 07:03:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.718 07:03:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:09.718 07:03:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.718 07:03:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:09.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:09.718 07:03:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.718 07:03:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:09.718 07:03:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.718 07:03:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:09.718 07:03:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.718 07:03:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:09.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:09.718 07:03:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.718 07:03:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:09.718 07:03:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:09.718 07:03:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:09.718 07:03:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:09.718 07:03:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:09.718 07:03:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.718 07:03:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:09.718 07:03:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:09.718 07:03:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:09.718 07:03:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:09.718 07:03:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:09.718 07:03:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:09.718 07:03:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:09.718 07:03:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:09.718 07:03:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:09.718 07:03:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:09.718 07:03:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:09.718 07:03:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:09.718 07:03:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:09.718 07:03:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:09.718 07:03:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:09.718 07:03:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:09.718 07:03:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:09.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:09.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:26:09.718 00:26:09.718 --- 10.0.0.2 ping statistics --- 00:26:09.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.718 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:09.718 07:03:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:09.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:09.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:26:09.718 00:26:09.718 --- 10.0.0.1 ping statistics --- 00:26:09.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.718 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:26:09.718 07:03:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.718 07:03:23 -- nvmf/common.sh@410 -- # return 0 00:26:09.718 07:03:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:09.718 07:03:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.718 07:03:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:09.718 07:03:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.718 07:03:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:09.718 07:03:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:09.976 07:03:23 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:09.976 07:03:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:09.976 07:03:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:09.976 07:03:23 -- common/autotest_common.sh@10 -- # set +x 00:26:09.976 07:03:23 -- nvmf/common.sh@469 -- # nvmfpid=613585 00:26:09.976 07:03:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:09.976 07:03:23 -- nvmf/common.sh@470 -- # waitforlisten 613585 00:26:09.976 07:03:23 -- common/autotest_common.sh@819 -- # '[' -z 613585 ']' 00:26:09.976 07:03:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.976 07:03:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:09.976 07:03:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.976 07:03:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:09.976 07:03:23 -- common/autotest_common.sh@10 -- # set +x 00:26:09.976 [2024-05-15 07:03:24.007848] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:09.976 [2024-05-15 07:03:24.007938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.976 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.976 [2024-05-15 07:03:24.082367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.976 [2024-05-15 07:03:24.185832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:09.976 [2024-05-15 07:03:24.186009] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:09.976 [2024-05-15 07:03:24.186028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.976 [2024-05-15 07:03:24.186041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.976 [2024-05-15 07:03:24.186068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.909 07:03:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:10.909 07:03:24 -- common/autotest_common.sh@852 -- # return 0 00:26:10.909 07:03:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:10.909 07:03:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:10.909 07:03:24 -- common/autotest_common.sh@10 -- # set +x 00:26:10.909 07:03:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.909 07:03:24 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:10.909 07:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:10.909 07:03:24 -- common/autotest_common.sh@10 -- # set +x 00:26:10.909 [2024-05-15 07:03:24.992877] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.909 [2024-05-15 07:03:25.001066] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:10.909 null0 00:26:10.909 [2024-05-15 07:03:25.033013] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.909 07:03:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:10.909 07:03:25 -- host/discovery_remove_ifc.sh@59 -- # hostpid=613742 00:26:10.909 07:03:25 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:10.909 07:03:25 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 613742 /tmp/host.sock 00:26:10.909 07:03:25 -- common/autotest_common.sh@819 -- # '[' -z 613742 ']' 00:26:10.909 07:03:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:26:10.909 07:03:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:10.910 07:03:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:10.910 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:10.910 07:03:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:10.910 07:03:25 -- common/autotest_common.sh@10 -- # set +x 00:26:10.910 [2024-05-15 07:03:25.092579] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:10.910 [2024-05-15 07:03:25.092658] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613742 ] 00:26:10.910 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.167 [2024-05-15 07:03:25.165209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.167 [2024-05-15 07:03:25.279836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:11.167 [2024-05-15 07:03:25.280026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.167 07:03:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:11.167 07:03:25 -- common/autotest_common.sh@852 -- # return 0 00:26:11.167 07:03:25 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.167 07:03:25 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:11.167 07:03:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.167 07:03:25 -- common/autotest_common.sh@10 -- # set +x 00:26:11.167 07:03:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.167 07:03:25 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:11.167 07:03:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.167 07:03:25 -- common/autotest_common.sh@10 -- # set +x 00:26:11.423 07:03:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:11.423 07:03:25 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:11.423 07:03:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:11.423 07:03:25 -- common/autotest_common.sh@10 -- # set +x 00:26:12.355 [2024-05-15 07:03:26.512157] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:12.355 [2024-05-15 07:03:26.512181] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:12.355 [2024-05-15 07:03:26.512203] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:12.613 [2024-05-15 07:03:26.599527] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:12.613 [2024-05-15 07:03:26.823022] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:12.613 [2024-05-15 07:03:26.823073] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:12.613 [2024-05-15 07:03:26.823108] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:12.613 [2024-05-15 07:03:26.823131] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:12.613 [2024-05-15 07:03:26.823155] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:12.613 07:03:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.613 07:03:26 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:12.613 07:03:26 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:26:12.613 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.613 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.613 07:03:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.613 07:03:26 -- common/autotest_common.sh@10 -- # set +x 00:26:12.613 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.613 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.613 [2024-05-15 07:03:26.829774] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xeef220 was disconnected and freed. delete nvme_qpair. 00:26:12.613 07:03:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.871 07:03:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.871 07:03:26 -- common/autotest_common.sh@10 -- # set +x 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.871 07:03:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.871 07:03:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.806 07:03:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.806 07:03:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.806 07:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.806 07:03:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.806 07:03:27 -- common/autotest_common.sh@10 -- # set +x 00:26:13.806 07:03:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.806 07:03:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.806 07:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.806 07:03:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.806 07:03:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.179 07:03:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.179 07:03:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.179 07:03:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.179 07:03:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:15.179 07:03:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.179 07:03:29 -- common/autotest_common.sh@10 -- # set +x 00:26:15.179 07:03:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.179 07:03:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:15.179 07:03:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.179 07:03:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.112 07:03:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.112 07:03:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:26:16.112 07:03:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.112 07:03:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.112 07:03:30 -- common/autotest_common.sh@10 -- # set +x 00:26:16.112 07:03:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.112 07:03:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.112 07:03:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.112 07:03:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.112 07:03:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.076 07:03:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.076 07:03:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.076 07:03:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.076 07:03:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.076 07:03:31 -- common/autotest_common.sh@10 -- # set +x 00:26:17.076 07:03:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.076 07:03:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.076 07:03:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.076 07:03:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.076 07:03:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.007 07:03:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.007 07:03:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.007 07:03:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.007 07:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.007 07:03:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.007 07:03:32 -- common/autotest_common.sh@10 -- # set +x 00:26:18.007 07:03:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.007 07:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.007 07:03:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.007 07:03:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.264 [2024-05-15 07:03:32.264031] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:18.264 [2024-05-15 07:03:32.264103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.264 [2024-05-15 07:03:32.264124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.264 [2024-05-15 07:03:32.264141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.264 [2024-05-15 07:03:32.264155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.264 [2024-05-15 07:03:32.264168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.264 [2024-05-15 07:03:32.264182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.264 [2024-05-15 07:03:32.264196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:18.264 [2024-05-15 07:03:32.264209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.264 [2024-05-15 07:03:32.264238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.264 [2024-05-15 07:03:32.264252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.264 [2024-05-15 07:03:32.264270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb5800 is same with the state(5) to be set 00:26:18.264 [2024-05-15 07:03:32.274051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb5800 (9): Bad file descriptor 00:26:18.265 [2024-05-15 07:03:32.284099] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:19.197 07:03:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.197 07:03:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.197 07:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.197 07:03:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.197 07:03:33 -- common/autotest_common.sh@10 -- # set +x 00:26:19.197 07:03:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.197 07:03:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.197 [2024-05-15 07:03:33.324995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:20.128 [2024-05-15 07:03:34.348961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:20.128 [2024-05-15 07:03:34.349012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb5800 with addr=10.0.0.2, port=4420 00:26:20.128 [2024-05-15 07:03:34.349037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeb5800 is same with the state(5) to be set 00:26:20.128 [2024-05-15 07:03:34.349463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb5800 (9): Bad file descriptor 00:26:20.128 [2024-05-15 07:03:34.349509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
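Note: the repeated get_bdev_list/sleep cycles traced above are the test polling for the nvme0n1 bdev to drop off after the target-side interface is removed, while the initiator keeps retrying the connection (errno 110). A stripped-down sketch of that polling loop, assuming SPDK's rpc.py is on PATH and using the /tmp/host.sock socket from this run (the helper bodies here are illustrative, not the script's exact definitions):

  get_bdev_list() {
      # same pipeline as the trace: list bdev names, one space-separated line
      rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll once a second until the bdev list matches the expected string
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev ''    # e.g. wait for nvme0n1 to disappear after the link drop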
00:26:20.128 [2024-05-15 07:03:34.349556] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:20.128 [2024-05-15 07:03:34.349595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.128 [2024-05-15 07:03:34.349618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.128 [2024-05-15 07:03:34.349638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.128 [2024-05-15 07:03:34.349653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.128 [2024-05-15 07:03:34.349669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.128 [2024-05-15 07:03:34.349684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.128 [2024-05-15 07:03:34.349699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.128 [2024-05-15 07:03:34.349715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.128 [2024-05-15 07:03:34.349731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.128 [2024-05-15 07:03:34.349747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.128 [2024-05-15 07:03:34.349761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
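Note: what triggered the failure path above is the test pulling the target's address and link out from under an established connection, waiting for the controller to be declared lost (the attach at the start of the case used --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1), and then restoring the interface so discovery can re-attach. The steps, as traced earlier and later in this log:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # ... controller resets fail (errno 110) until the loss timeout fires ...
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up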
00:26:20.128 [2024-05-15 07:03:34.350072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb5c10 (9): Bad file descriptor 00:26:20.128 [2024-05-15 07:03:34.351089] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:20.128 [2024-05-15 07:03:34.351110] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:20.128 07:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:20.386 07:03:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.386 07:03:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.319 07:03:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.319 07:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.319 07:03:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.319 07:03:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.319 07:03:35 -- common/autotest_common.sh@10 -- # set +x 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.319 07:03:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:21.319 07:03:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.250 [2024-05-15 07:03:36.408312] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:22.250 [2024-05-15 07:03:36.408339] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:22.250 [2024-05-15 07:03:36.408366] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:22.508 07:03:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.508 07:03:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.508 07:03:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.508 07:03:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.508 07:03:36 -- common/autotest_common.sh@10 -- # set +x 00:26:22.508 07:03:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.508 07:03:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.508 07:03:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.508 07:03:36 -- 
host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:22.508 07:03:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.508 [2024-05-15 07:03:36.535794] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:22.508 [2024-05-15 07:03:36.717140] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:22.508 [2024-05-15 07:03:36.717184] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:22.508 [2024-05-15 07:03:36.717213] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:22.508 [2024-05-15 07:03:36.717251] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:22.508 [2024-05-15 07:03:36.717267] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:22.508 [2024-05-15 07:03:36.726106] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xec32b0 was disconnected and freed. delete nvme_qpair. 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.442 07:03:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.442 07:03:37 -- common/autotest_common.sh@10 -- # set +x 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.442 07:03:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:23.442 07:03:37 -- host/discovery_remove_ifc.sh@90 -- # killprocess 613742 00:26:23.442 07:03:37 -- common/autotest_common.sh@926 -- # '[' -z 613742 ']' 00:26:23.442 07:03:37 -- common/autotest_common.sh@930 -- # kill -0 613742 00:26:23.442 07:03:37 -- common/autotest_common.sh@931 -- # uname 00:26:23.442 07:03:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.442 07:03:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 613742 00:26:23.442 07:03:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:23.442 07:03:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:23.442 07:03:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 613742' 00:26:23.442 killing process with pid 613742 00:26:23.442 07:03:37 -- common/autotest_common.sh@945 -- # kill 613742 00:26:23.442 07:03:37 -- common/autotest_common.sh@950 -- # wait 613742 00:26:23.700 07:03:37 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:23.700 07:03:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:23.700 07:03:37 -- nvmf/common.sh@116 -- # sync 00:26:23.700 07:03:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:23.700 07:03:37 -- nvmf/common.sh@119 -- # set +e 00:26:23.700 07:03:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:23.700 07:03:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:23.700 rmmod nvme_tcp 00:26:23.700 rmmod nvme_fabrics 00:26:23.700 rmmod nvme_keyring 00:26:23.700 07:03:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:23.700 07:03:37 -- nvmf/common.sh@123 -- # set -e 00:26:23.700 
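Note: with nvme1n1 re-attached the case passes and the trace moves into teardown — kill the host-side app, then the target, then unload the kernel NVMe/TCP stack. Roughly (pid from this run; the module unload lines match the rmmod output above):

  kill "$hostpid"             # 613742 in this run; the target pid follows the same path
  modprobe -v -r nvme-tcp     # logs: rmmod nvme_tcp, rmmod nvme_fabrics, rmmod nvme_keyring
  modprobe -v -r nvme-fabrics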
07:03:37 -- nvmf/common.sh@124 -- # return 0 00:26:23.700 07:03:37 -- nvmf/common.sh@477 -- # '[' -n 613585 ']' 00:26:23.700 07:03:37 -- nvmf/common.sh@478 -- # killprocess 613585 00:26:23.700 07:03:37 -- common/autotest_common.sh@926 -- # '[' -z 613585 ']' 00:26:23.700 07:03:37 -- common/autotest_common.sh@930 -- # kill -0 613585 00:26:23.700 07:03:37 -- common/autotest_common.sh@931 -- # uname 00:26:23.700 07:03:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.700 07:03:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 613585 00:26:23.957 07:03:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:23.957 07:03:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:23.957 07:03:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 613585' 00:26:23.957 killing process with pid 613585 00:26:23.957 07:03:37 -- common/autotest_common.sh@945 -- # kill 613585 00:26:23.957 07:03:37 -- common/autotest_common.sh@950 -- # wait 613585 00:26:24.214 07:03:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:24.214 07:03:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:24.214 07:03:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:24.214 07:03:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.214 07:03:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:24.214 07:03:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.214 07:03:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.214 07:03:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.116 07:03:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:26.116 00:26:26.116 real 0m19.121s 00:26:26.116 user 0m26.008s 00:26:26.116 sys 0m3.410s 00:26:26.116 07:03:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.116 07:03:40 -- common/autotest_common.sh@10 -- # set +x 00:26:26.116 ************************************ 00:26:26.116 END TEST nvmf_discovery_remove_ifc 00:26:26.116 ************************************ 00:26:26.116 07:03:40 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:26:26.116 07:03:40 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:26.116 07:03:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:26.116 07:03:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:26.116 07:03:40 -- common/autotest_common.sh@10 -- # set +x 00:26:26.116 ************************************ 00:26:26.116 START TEST nvmf_digest 00:26:26.116 ************************************ 00:26:26.116 07:03:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:26.374 * Looking for test storage... 
00:26:26.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:26.374 07:03:40 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.374 07:03:40 -- nvmf/common.sh@7 -- # uname -s 00:26:26.374 07:03:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.374 07:03:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.374 07:03:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.374 07:03:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.374 07:03:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.374 07:03:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.374 07:03:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.374 07:03:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.374 07:03:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.374 07:03:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.374 07:03:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.374 07:03:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.374 07:03:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.374 07:03:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.374 07:03:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.374 07:03:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.374 07:03:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.374 07:03:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.374 07:03:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.374 07:03:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.374 07:03:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.374 07:03:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.374 07:03:40 -- paths/export.sh@5 -- # export PATH 00:26:26.374 07:03:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.374 07:03:40 -- nvmf/common.sh@46 -- # : 0 00:26:26.374 07:03:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:26.374 07:03:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:26.374 07:03:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:26.374 07:03:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.374 07:03:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.374 07:03:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:26.374 07:03:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:26.374 07:03:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:26.374 07:03:40 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:26.374 07:03:40 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:26.374 07:03:40 -- host/digest.sh@16 -- # runtime=2 00:26:26.374 07:03:40 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:26:26.374 07:03:40 -- host/digest.sh@132 -- # nvmftestinit 00:26:26.374 07:03:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:26.374 07:03:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.374 07:03:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:26.374 07:03:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:26.374 07:03:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:26.374 07:03:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.374 07:03:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.374 07:03:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.374 07:03:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:26.374 07:03:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:26.374 07:03:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:26.374 07:03:40 -- common/autotest_common.sh@10 -- # set +x 00:26:28.905 07:03:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:28.905 07:03:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:28.905 07:03:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:28.905 07:03:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:28.905 07:03:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:28.905 07:03:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:28.905 07:03:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:28.905 07:03:42 -- 
nvmf/common.sh@294 -- # net_devs=() 00:26:28.905 07:03:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:28.905 07:03:42 -- nvmf/common.sh@295 -- # e810=() 00:26:28.905 07:03:42 -- nvmf/common.sh@295 -- # local -ga e810 00:26:28.905 07:03:42 -- nvmf/common.sh@296 -- # x722=() 00:26:28.905 07:03:42 -- nvmf/common.sh@296 -- # local -ga x722 00:26:28.905 07:03:42 -- nvmf/common.sh@297 -- # mlx=() 00:26:28.905 07:03:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:28.905 07:03:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.905 07:03:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:28.905 07:03:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:28.905 07:03:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:28.905 07:03:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:28.905 07:03:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:28.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:28.905 07:03:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:28.905 07:03:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:28.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:28.905 07:03:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:28.905 07:03:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.905 07:03:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.905 07:03:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.905 07:03:42 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.905 07:03:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:28.905 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:28.905 07:03:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.905 07:03:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:28.905 07:03:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.905 07:03:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:28.905 07:03:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.905 07:03:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:28.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:28.905 07:03:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.905 07:03:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:28.905 07:03:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:28.905 07:03:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:28.905 07:03:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.905 07:03:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.905 07:03:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.905 07:03:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:28.905 07:03:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.905 07:03:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.905 07:03:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:28.905 07:03:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.905 07:03:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.905 07:03:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:28.905 07:03:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:28.905 07:03:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.905 07:03:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.905 07:03:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.905 07:03:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.905 07:03:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:28.905 07:03:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.905 07:03:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.905 07:03:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.905 07:03:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:28.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:26:28.905 00:26:28.905 --- 10.0.0.2 ping statistics --- 00:26:28.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.905 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:26:28.905 07:03:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:26:28.905 00:26:28.905 --- 10.0.0.1 ping statistics --- 00:26:28.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.905 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:26:28.905 07:03:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.905 07:03:42 -- nvmf/common.sh@410 -- # return 0 00:26:28.905 07:03:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:28.905 07:03:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.905 07:03:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:28.905 07:03:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.905 07:03:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:28.905 07:03:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:28.905 07:03:42 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:28.905 07:03:42 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:26:28.905 07:03:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:28.905 07:03:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:28.905 07:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:28.905 ************************************ 00:26:28.905 START TEST nvmf_digest_clean 00:26:28.905 ************************************ 00:26:28.905 07:03:42 -- common/autotest_common.sh@1104 -- # run_digest 00:26:28.905 07:03:42 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:26:28.905 07:03:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:28.905 07:03:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:28.905 07:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:28.905 07:03:42 -- nvmf/common.sh@469 -- # nvmfpid=617627 00:26:28.905 07:03:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:28.905 07:03:42 -- nvmf/common.sh@470 -- # waitforlisten 617627 00:26:28.905 07:03:42 -- common/autotest_common.sh@819 -- # '[' -z 617627 ']' 00:26:28.906 07:03:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.906 07:03:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.906 07:03:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.906 07:03:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.906 07:03:42 -- common/autotest_common.sh@10 -- # set +x 00:26:28.906 [2024-05-15 07:03:42.980586] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:26:28.906 [2024-05-15 07:03:42.980672] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.906 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.906 [2024-05-15 07:03:43.056645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.165 [2024-05-15 07:03:43.164812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:29.165 [2024-05-15 07:03:43.164992] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.166 [2024-05-15 07:03:43.165010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.166 [2024-05-15 07:03:43.165022] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.166 [2024-05-15 07:03:43.165048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.166 07:03:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.166 07:03:43 -- common/autotest_common.sh@852 -- # return 0 00:26:29.166 07:03:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:29.166 07:03:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:29.166 07:03:43 -- common/autotest_common.sh@10 -- # set +x 00:26:29.166 07:03:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.166 07:03:43 -- host/digest.sh@120 -- # common_target_config 00:26:29.166 07:03:43 -- host/digest.sh@43 -- # rpc_cmd 00:26:29.166 07:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:29.166 07:03:43 -- common/autotest_common.sh@10 -- # set +x 00:26:29.166 null0 00:26:29.166 [2024-05-15 07:03:43.328262] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.166 [2024-05-15 07:03:43.352490] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.166 07:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:29.166 07:03:43 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:26:29.166 07:03:43 -- host/digest.sh@77 -- # local rw bs qd 00:26:29.166 07:03:43 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:29.166 07:03:43 -- host/digest.sh@80 -- # rw=randread 00:26:29.166 07:03:43 -- host/digest.sh@80 -- # bs=4096 00:26:29.166 07:03:43 -- host/digest.sh@80 -- # qd=128 00:26:29.166 07:03:43 -- host/digest.sh@82 -- # bperfpid=617703 00:26:29.166 07:03:43 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:29.166 07:03:43 -- host/digest.sh@83 -- # waitforlisten 617703 /var/tmp/bperf.sock 00:26:29.166 07:03:43 -- common/autotest_common.sh@819 -- # '[' -z 617703 ']' 00:26:29.166 07:03:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:29.166 07:03:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:29.166 07:03:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:29.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
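Note: each run_bperf case starts a bdevperf instance and blocks in waitforlisten until its RPC socket answers — the 'Waiting for process to start up...' lines above are that gate. A condensed sketch (the real helper lives in autotest_common.sh; the retry budget and the rpc_get_methods probe below are assumptions, not its exact body):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1    # give up if the app died
          if rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                              # socket is up and answering RPCs
          fi
          sleep 0.1
      done
      return 1
  }
  waitforlisten "$bperfpid" /var/tmp/bperf.sock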
00:26:29.166 07:03:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:29.166 07:03:43 -- common/autotest_common.sh@10 -- # set +x 00:26:29.166 [2024-05-15 07:03:43.395483] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:29.166 [2024-05-15 07:03:43.395544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617703 ] 00:26:29.423 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.423 [2024-05-15 07:03:43.468616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.423 [2024-05-15 07:03:43.582994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.423 07:03:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.423 07:03:43 -- common/autotest_common.sh@852 -- # return 0 00:26:29.423 07:03:43 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:29.423 07:03:43 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:29.423 07:03:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:29.988 07:03:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.988 07:03:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.246 nvme0n1 00:26:30.503 07:03:44 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:30.503 07:03:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.503 Running I/O for 2 seconds... 
00:26:32.431 00:26:32.431 Latency(us) 00:26:32.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.431 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:32.431 nvme0n1 : 2.00 19260.08 75.23 0.00 0.00 6639.41 2415.12 12815.93 00:26:32.431 =================================================================================================================== 00:26:32.431 Total : 19260.08 75.23 0.00 0.00 6639.41 2415.12 12815.93 00:26:32.431 0 00:26:32.431 07:03:46 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:32.431 07:03:46 -- host/digest.sh@92 -- # get_accel_stats 00:26:32.431 07:03:46 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:32.431 07:03:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:32.431 07:03:46 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:32.431 | select(.opcode=="crc32c") 00:26:32.431 | "\(.module_name) \(.executed)"' 00:26:32.689 07:03:46 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:32.689 07:03:46 -- host/digest.sh@93 -- # exp_module=software 00:26:32.689 07:03:46 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:32.689 07:03:46 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:32.689 07:03:46 -- host/digest.sh@97 -- # killprocess 617703 00:26:32.689 07:03:46 -- common/autotest_common.sh@926 -- # '[' -z 617703 ']' 00:26:32.689 07:03:46 -- common/autotest_common.sh@930 -- # kill -0 617703 00:26:32.689 07:03:46 -- common/autotest_common.sh@931 -- # uname 00:26:32.689 07:03:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:32.689 07:03:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 617703 00:26:32.689 07:03:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:32.689 07:03:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:32.689 07:03:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 617703' 00:26:32.689 killing process with pid 617703 00:26:32.689 07:03:46 -- common/autotest_common.sh@945 -- # kill 617703 00:26:32.689 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.689 00:26:32.689 Latency(us) 00:26:32.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.689 =================================================================================================================== 00:26:32.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.689 07:03:46 -- common/autotest_common.sh@950 -- # wait 617703 00:26:32.947 07:03:47 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:26:32.947 07:03:47 -- host/digest.sh@77 -- # local rw bs qd 00:26:32.947 07:03:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:32.947 07:03:47 -- host/digest.sh@80 -- # rw=randread 00:26:32.947 07:03:47 -- host/digest.sh@80 -- # bs=131072 00:26:32.947 07:03:47 -- host/digest.sh@80 -- # qd=16 00:26:32.947 07:03:47 -- host/digest.sh@82 -- # bperfpid=618127 00:26:32.947 07:03:47 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:32.947 07:03:47 -- host/digest.sh@83 -- # waitforlisten 618127 /var/tmp/bperf.sock 00:26:32.947 07:03:47 -- common/autotest_common.sh@819 -- # '[' -z 618127 ']' 00:26:32.947 07:03:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
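Note: the pass/fail decision for each digest case above comes from the accel framework's statistics — after the 2-second bdevperf run, the test reads how many crc32c operations executed and which module ran them (software in this configuration, since no crc32c offload is enabled). The check, with the jq filter taken verbatim from the trace:

  read -r acc_module acc_executed < <(
      rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  [[ $acc_module == software ]] && (( acc_executed > 0 ))   # digest work actually ran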
00:26:32.947 07:03:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:32.947 07:03:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:32.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:32.947 07:03:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:32.947 07:03:47 -- common/autotest_common.sh@10 -- # set +x 00:26:33.205 [2024-05-15 07:03:47.193789] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:26:33.205 [2024-05-15 07:03:47.193869] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618127 ] 00:26:33.205 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:33.205 Zero copy mechanism will not be used. 00:26:33.205 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.205 [2024-05-15 07:03:47.265634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.205 [2024-05-15 07:03:47.377209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.138 07:03:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:34.138 07:03:48 -- common/autotest_common.sh@852 -- # return 0 00:26:34.138 07:03:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:34.138 07:03:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:34.138 07:03:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:34.396 07:03:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.396 07:03:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.654 nvme0n1 00:26:34.654 07:03:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:34.654 07:03:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:34.654 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:34.654 Zero copy mechanism will not be used. 00:26:34.654 Running I/O for 2 seconds... 
00:26:37.177 00:26:37.177 Latency(us) 00:26:37.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.177 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:37.177 nvme0n1 : 2.00 1645.20 205.65 0.00 0.00 9720.47 9272.13 14757.74 00:26:37.177 =================================================================================================================== 00:26:37.177 Total : 1645.20 205.65 0.00 0.00 9720.47 9272.13 14757.74 00:26:37.177 0 00:26:37.177 07:03:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:37.177 07:03:50 -- host/digest.sh@92 -- # get_accel_stats 00:26:37.177 07:03:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:37.177 07:03:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:37.177 | select(.opcode=="crc32c") 00:26:37.177 | "\(.module_name) \(.executed)"' 00:26:37.177 07:03:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:37.177 07:03:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:37.177 07:03:51 -- host/digest.sh@93 -- # exp_module=software 00:26:37.177 07:03:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:37.177 07:03:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:37.177 07:03:51 -- host/digest.sh@97 -- # killprocess 618127 00:26:37.177 07:03:51 -- common/autotest_common.sh@926 -- # '[' -z 618127 ']' 00:26:37.177 07:03:51 -- common/autotest_common.sh@930 -- # kill -0 618127 00:26:37.177 07:03:51 -- common/autotest_common.sh@931 -- # uname 00:26:37.177 07:03:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:37.177 07:03:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 618127 00:26:37.177 07:03:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:37.177 07:03:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:37.177 07:03:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 618127' 00:26:37.177 killing process with pid 618127 00:26:37.177 07:03:51 -- common/autotest_common.sh@945 -- # kill 618127 00:26:37.177 Received shutdown signal, test time was about 2.000000 seconds 00:26:37.177 00:26:37.177 Latency(us) 00:26:37.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.177 =================================================================================================================== 00:26:37.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.177 07:03:51 -- common/autotest_common.sh@950 -- # wait 618127 00:26:37.435 07:03:51 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:26:37.435 07:03:51 -- host/digest.sh@77 -- # local rw bs qd 00:26:37.435 07:03:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:37.435 07:03:51 -- host/digest.sh@80 -- # rw=randwrite 00:26:37.435 07:03:51 -- host/digest.sh@80 -- # bs=4096 00:26:37.435 07:03:51 -- host/digest.sh@80 -- # qd=128 00:26:37.435 07:03:51 -- host/digest.sh@82 -- # bperfpid=618679 00:26:37.435 07:03:51 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:37.435 07:03:51 -- host/digest.sh@83 -- # waitforlisten 618679 /var/tmp/bperf.sock 00:26:37.435 07:03:51 -- common/autotest_common.sh@819 -- # '[' -z 618679 ']' 00:26:37.435 07:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
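Note: the 'zero copy threshold' messages above are expected, not errors — bdevperf reports that it will not use its zero-copy path for I/O larger than 65536 bytes, so the 128 KiB cases fall back to copied buffers (hence the much lower IOPS at queue depth 16). Each case differs only in the bdevperf workload flags; for the large-block read case just finished (path relative to the spdk checkout):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc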
00:26:37.435 07:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:37.435 07:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:37.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:37.435 07:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:37.435 07:03:51 -- common/autotest_common.sh@10 -- # set +x
00:26:37.435 [2024-05-15 07:03:51.471813] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:37.435 [2024-05-15 07:03:51.471884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618679 ]
00:26:37.435 EAL: No free 2048 kB hugepages reported on node 1
00:26:37.435 [2024-05-15 07:03:51.542596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:37.435 [2024-05-15 07:03:51.646618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:37.693 07:03:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:37.693 07:03:51 -- common/autotest_common.sh@852 -- # return 0
00:26:37.693 07:03:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:26:37.693 07:03:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:26:37.693 07:03:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:37.950 07:03:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:37.950 07:03:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:38.208 nvme0n1
00:26:38.208 07:03:52 -- host/digest.sh@91 -- # bperf_py perform_tests
00:26:38.208 07:03:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:38.208 Running I/O for 2 seconds...
00:26:40.732
00:26:40.732 Latency(us)
00:26:40.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:40.732 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:40.732 nvme0n1 : 2.01 19220.51 75.08 0.00 0.00 6644.93 2864.17 18252.99
00:26:40.732 ===================================================================================================================
00:26:40.732 Total : 19220.51 75.08 0.00 0.00 6644.93 2864.17 18252.99
00:26:40.732 0
00:26:40.732 07:03:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:26:40.732 07:03:54 -- host/digest.sh@92 -- # get_accel_stats
00:26:40.732 07:03:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:40.732 07:03:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:40.732 07:03:54 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:40.732 | select(.opcode=="crc32c")
00:26:40.732 | "\(.module_name) \(.executed)"'
00:26:40.732 07:03:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:26:40.732 07:03:54 -- host/digest.sh@93 -- # exp_module=software
00:26:40.732 07:03:54 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:26:40.732 07:03:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:40.732 07:03:54 -- host/digest.sh@97 -- # killprocess 618679
00:26:40.732 07:03:54 -- common/autotest_common.sh@926 -- # '[' -z 618679 ']'
00:26:40.732 07:03:54 -- common/autotest_common.sh@930 -- # kill -0 618679
00:26:40.732 07:03:54 -- common/autotest_common.sh@931 -- # uname
00:26:40.732 07:03:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:40.732 07:03:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 618679
00:26:40.732 07:03:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:40.732 07:03:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:40.732 07:03:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 618679'
killing process with pid 618679
07:03:54 -- common/autotest_common.sh@945 -- # kill 618679
Received shutdown signal, test time was about 2.000000 seconds
00:26:40.732
00:26:40.732 Latency(us)
00:26:40.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:40.732 ===================================================================================================================
00:26:40.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:40.732 07:03:54 -- common/autotest_common.sh@950 -- # wait 618679
00:26:40.991 07:03:55 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16
00:26:40.991 07:03:55 -- host/digest.sh@77 -- # local rw bs qd
00:26:40.991 07:03:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:26:40.991 07:03:55 -- host/digest.sh@80 -- # rw=randwrite
00:26:40.991 07:03:55 -- host/digest.sh@80 -- # bs=131072
00:26:40.991 07:03:55 -- host/digest.sh@80 -- # qd=16
00:26:40.991 07:03:55 -- host/digest.sh@82 -- # bperfpid=619103
00:26:40.991 07:03:55 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:26:40.991 07:03:55 -- host/digest.sh@83 -- # waitforlisten 619103 /var/tmp/bperf.sock
00:26:40.991 07:03:55 -- common/autotest_common.sh@819 -- # '[' -z 619103 ']'
00:26:40.991 07:03:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:40.991 07:03:55 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:40.991 07:03:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:40.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:40.991 07:03:55 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:40.991 07:03:55 -- common/autotest_common.sh@10 -- # set +x
00:26:41.249 [2024-05-15 07:03:55.050471] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:41.249 [2024-05-15 07:03:55.050552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619103 ]
00:26:41.249 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:41.249 Zero copy mechanism will not be used.
00:26:41.249 EAL: No free 2048 kB hugepages reported on node 1
00:26:41.249 [2024-05-15 07:03:55.122280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:41.249 [2024-05-15 07:03:55.228939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:41.249 07:03:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:41.249 07:03:55 -- common/autotest_common.sh@852 -- # return 0
00:26:41.249 07:03:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:26:41.249 07:03:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:26:41.249 07:03:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:26:41.508 07:03:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:41.508 07:03:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:41.766 nvme0n1
00:26:41.766 07:03:55 -- host/digest.sh@91 -- # bperf_py perform_tests
00:26:41.766 07:03:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:42.023 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:42.023 Zero copy mechanism will not be used.
00:26:42.023 Running I/O for 2 seconds...
00:26:43.921
00:26:43.921 Latency(us)
00:26:43.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:43.921 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:43.921 nvme0n1 : 2.01 1291.07 161.38 0.00 0.00 12350.77 5218.61 16505.36
00:26:43.921 ===================================================================================================================
00:26:43.921 Total : 1291.07 161.38 0.00 0.00 12350.77 5218.61 16505.36
00:26:43.921 0
00:26:43.921 07:03:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:26:43.921 07:03:58 -- host/digest.sh@92 -- # get_accel_stats
00:26:43.921 07:03:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:43.921 07:03:58 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:43.921 | select(.opcode=="crc32c")
00:26:43.921 | "\(.module_name) \(.executed)"'
00:26:43.921 07:03:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:44.179 07:03:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:26:44.179 07:03:58 -- host/digest.sh@93 -- # exp_module=software
00:26:44.179 07:03:58 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:26:44.179 07:03:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:44.179 07:03:58 -- host/digest.sh@97 -- # killprocess 619103
00:26:44.179 07:03:58 -- common/autotest_common.sh@926 -- # '[' -z 619103 ']'
00:26:44.179 07:03:58 -- common/autotest_common.sh@930 -- # kill -0 619103
00:26:44.179 07:03:58 -- common/autotest_common.sh@931 -- # uname
00:26:44.179 07:03:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:44.179 07:03:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 619103
00:26:44.179 07:03:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:44.179 07:03:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:44.179 07:03:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 619103'
killing process with pid 619103
07:03:58 -- common/autotest_common.sh@945 -- # kill 619103
Received shutdown signal, test time was about 2.000000 seconds
00:26:44.179
00:26:44.179 Latency(us)
00:26:44.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.179 ===================================================================================================================
00:26:44.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:44.179 07:03:58 -- common/autotest_common.sh@950 -- # wait 619103
00:26:44.437 07:03:58 -- host/digest.sh@126 -- # killprocess 617627
00:26:44.437 07:03:58 -- common/autotest_common.sh@926 -- # '[' -z 617627 ']'
00:26:44.437 07:03:58 -- common/autotest_common.sh@930 -- # kill -0 617627
00:26:44.437 07:03:58 -- common/autotest_common.sh@931 -- # uname
00:26:44.437 07:03:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:44.437 07:03:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 617627
00:26:44.437 07:03:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:26:44.437 07:03:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:26:44.437 07:03:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 617627'
killing process with pid 617627
07:03:58 -- common/autotest_common.sh@945 -- # kill 617627
07:03:58 -- common/autotest_common.sh@950 -- # wait 617627
00:26:44.695
00:26:44.695 real 0m15.964s
00:26:44.695 user 0m32.027s
00:26:44.695 sys 0m3.926s
00:26:44.695 07:03:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:44.695 07:03:58 -- common/autotest_common.sh@10 -- # set +x
00:26:44.695 ************************************
00:26:44.695 END TEST nvmf_digest_clean
00:26:44.695 ************************************
00:26:44.695 07:03:58 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error
00:26:44.695 07:03:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:26:44.695 07:03:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:26:44.695 07:03:58 -- common/autotest_common.sh@10 -- # set +x
00:26:44.695 ************************************
00:26:44.695 START TEST nvmf_digest_error
00:26:44.695 ************************************
00:26:44.695 07:03:58 -- common/autotest_common.sh@1104 -- # run_digest_error
00:26:44.695 07:03:58 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc
00:26:44.695 07:03:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:26:44.695 07:03:58 -- common/autotest_common.sh@712 -- # xtrace_disable
00:26:44.695 07:03:58 -- common/autotest_common.sh@10 -- # set +x
00:26:44.953 07:03:58 -- nvmf/common.sh@469 -- # nvmfpid=619669
00:26:44.953 07:03:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:26:44.953 07:03:58 -- nvmf/common.sh@470 -- # waitforlisten 619669
00:26:44.953 07:03:58 -- common/autotest_common.sh@819 -- # '[' -z 619669 ']'
00:26:44.953 07:03:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:44.953 07:03:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:44.953 07:03:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:44.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:44.953 07:03:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:44.953 07:03:58 -- common/autotest_common.sh@10 -- # set +x
00:26:44.953 [2024-05-15 07:03:58.970239] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:44.953 [2024-05-15 07:03:58.970326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:44.953 EAL: No free 2048 kB hugepages reported on node 1
00:26:44.953 [2024-05-15 07:03:59.048499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.953 [2024-05-15 07:03:59.162360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:44.953 [2024-05-15 07:03:59.162518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:44.953 [2024-05-15 07:03:59.162536] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:44.953 [2024-05-15 07:03:59.162548] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:44.953 [2024-05-15 07:03:59.162575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.211 07:03:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:45.211 07:03:59 -- common/autotest_common.sh@852 -- # return 0
00:26:45.211 07:03:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:26:45.211 07:03:59 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:45.211 07:03:59 -- common/autotest_common.sh@10 -- # set +x
00:26:45.211 07:03:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:45.211 07:03:59 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:26:45.211 07:03:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:45.211 07:03:59 -- common/autotest_common.sh@10 -- # set +x
00:26:45.211 [2024-05-15 07:03:59.223133] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:26:45.211 07:03:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:45.211 07:03:59 -- host/digest.sh@104 -- # common_target_config
00:26:45.211 07:03:59 -- host/digest.sh@43 -- # rpc_cmd
00:26:45.211 07:03:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:45.211 07:03:59 -- common/autotest_common.sh@10 -- # set +x
00:26:45.211 null0
00:26:45.211 [2024-05-15 07:03:59.337265] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:45.212 [2024-05-15 07:03:59.361499] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:45.212 07:03:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:45.212 07:03:59 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128
00:26:45.212 07:03:59 -- host/digest.sh@54 -- # local rw bs qd
00:26:45.212 07:03:59 -- host/digest.sh@56 -- # rw=randread
00:26:45.212 07:03:59 -- host/digest.sh@56 -- # bs=4096
00:26:45.212 07:03:59 -- host/digest.sh@56 -- # qd=128
00:26:45.212 07:03:59 -- host/digest.sh@58 -- # bperfpid=619694
00:26:45.212 07:03:59 -- host/digest.sh@60 -- # waitforlisten 619694 /var/tmp/bperf.sock
00:26:45.212 07:03:59 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:26:45.212 07:03:59 -- common/autotest_common.sh@819 -- # '[' -z 619694 ']'
00:26:45.212 07:03:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:45.212 07:03:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:45.212 07:03:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:45.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:45.212 07:03:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:45.212 07:03:59 -- common/autotest_common.sh@10 -- # set +x
00:26:45.212 [2024-05-15 07:03:59.403784] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:45.212 [2024-05-15 07:03:59.403845] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619694 ]
00:26:45.212 EAL: No free 2048 kB hugepages reported on node 1
00:26:45.470 [2024-05-15 07:03:59.475990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.470 [2024-05-15 07:03:59.590341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:46.404 07:04:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:46.404 07:04:00 -- common/autotest_common.sh@852 -- # return 0
00:26:46.404 07:04:00 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:46.404 07:04:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:46.661 07:04:00 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:46.661 07:04:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:46.661 07:04:00 -- common/autotest_common.sh@10 -- # set +x
00:26:46.661 07:04:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:46.661 07:04:00 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:46.661 07:04:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:46.919 nvme0n1
00:26:46.919 07:04:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:46.919 07:04:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:46.919 07:04:01 -- common/autotest_common.sh@10 -- # set +x
00:26:46.919 07:04:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:46.919 07:04:01 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:46.919 07:04:01 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:47.177 Running I/O for 2 seconds...
00:26:47.177 [2024-05-15 07:04:01.255226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.255288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.255310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.270506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.270552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.270591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.283635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.283689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.283721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.297077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.297106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.297137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.314714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.314758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.314789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.327670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.327715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.327755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.340414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.340458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.340489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.353713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.353758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.353789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.366560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.366605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.366644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.378922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.378986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.379016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.391780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.391826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.391856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.177 [2024-05-15 07:04:01.404458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.177 [2024-05-15 07:04:01.404492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.177 [2024-05-15 07:04:01.404510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.417604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.417648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.417680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.430596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.430639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.430669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.443018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.443064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.443094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.455131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.455185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.455234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.466606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.466653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.466682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.478528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.478566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.478592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.490205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.490257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.490282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.502208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.502265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.502292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.515867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.515908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.515961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.527528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.527568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.527609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.540506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.540538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.540555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.553378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.553419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.553446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.566623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.566664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.436 [2024-05-15 07:04:01.566694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.436 [2024-05-15 07:04:01.580702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.436 [2024-05-15 07:04:01.580742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.437 [2024-05-15 07:04:01.580769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.437 [2024-05-15 07:04:01.594078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.437 [2024-05-15 07:04:01.594130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.437 [2024-05-15 07:04:01.594156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.437 [2024-05-15 07:04:01.608101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.437 [2024-05-15 07:04:01.608131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.437 [2024-05-15 07:04:01.608147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.437 [2024-05-15 07:04:01.626779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.437 [2024-05-15 07:04:01.626826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.437 [2024-05-15 07:04:01.626856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.437 [2024-05-15 07:04:01.640243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.437 [2024-05-15 07:04:01.640287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.437 [2024-05-15 07:04:01.640320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.437 [2024-05-15 07:04:01.655146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.437 [2024-05-15 07:04:01.655184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.437 [2024-05-15 07:04:01.655213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.437 [2024-05-15 07:04:01.668831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.437 [2024-05-15 07:04:01.668875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.437 [2024-05-15 07:04:01.668923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.702 [2024-05-15 07:04:01.682125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.702 [2024-05-15 07:04:01.682156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.702 [2024-05-15 07:04:01.682187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.702 [2024-05-15 07:04:01.696109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.702 [2024-05-15 07:04:01.696163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.702 [2024-05-15 07:04:01.696188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.702 [2024-05-15 07:04:01.708977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.702 [2024-05-15 07:04:01.709013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.702 [2024-05-15 07:04:01.709054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.702 [2024-05-15 07:04:01.726147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.702 [2024-05-15 07:04:01.726186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.702 [2024-05-15 07:04:01.726213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.702 [2024-05-15 07:04:01.738570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.702 [2024-05-15 07:04:01.738605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.702 [2024-05-15 07:04:01.738625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.702 [2024-05-15 07:04:01.756642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.702 [2024-05-15 07:04:01.756688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.702 [2024-05-15 07:04:01.756718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.702 [2024-05-15 07:04:01.770773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.702 [2024-05-15 07:04:01.770818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.702 [2024-05-15 07:04:01.770848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.785179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.785219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.785263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.797289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.797342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.797375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.808942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.808988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.809004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.822988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.823020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.823037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.836245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.836301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.836331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.848536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.848585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.848616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.863827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.863872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.863902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.876383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.876424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.876451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.890654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.890694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.890724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.903299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.903340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.903374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.915738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.915780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.915809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.703 [2024-05-15 07:04:01.927268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.703 [2024-05-15 07:04:01.927309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.703 [2024-05-15 07:04:01.927338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:01.938200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:01.938241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:01.938268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:01.951837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:01.951878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:01.951905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:01.963361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:01.963415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:01.963443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:01.975211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:01.975264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:01.975290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:01.986190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:01.986229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:01.986261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:01.998824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:01.998856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:01.998873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.011861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.011909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.011955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.023592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.023632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.023659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.035729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.035769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.035798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.047625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.047663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.047690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.059559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.059591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.059608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.071093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.071122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.071156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.082691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.082724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.082741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.099489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.099528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.099555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.115947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.116001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.116042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.132523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.132568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.132599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.152635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.152679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.152709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.167263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.167308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.167338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.179976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.180016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.180043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.192694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.192739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.192770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.206283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.206327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.987 [2024-05-15 07:04:02.206357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.987 [2024-05-15 07:04:02.218637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:47.987 [2024-05-15 07:04:02.218677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.988 [2024-05-15 07:04:02.218705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.229719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.229759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.229786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.242759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.242798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.242833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.254011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.254052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.254082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.265270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.265315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.265343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.277031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.277085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.277112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.289033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.289072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.289100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.300904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.300987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.301015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.313502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.313541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.313568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.325831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.325862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.325879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.341626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.245 [2024-05-15 07:04:02.341663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.245 [2024-05-15 07:04:02.341682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.245 [2024-05-15 07:04:02.362234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.246 [2024-05-15 07:04:02.362286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.246 [2024-05-15 07:04:02.362317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.246 [2024-05-15 07:04:02.376332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.246 [2024-05-15 07:04:02.376368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.246 [2024-05-15 07:04:02.376387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.246 [2024-05-15 07:04:02.396533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.246 [2024-05-15 07:04:02.396570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.246 [2024-05-15 07:04:02.396589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.246 [2024-05-15 07:04:02.417800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.246 [2024-05-15 07:04:02.417844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.246 [2024-05-15 07:04:02.417875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.246 [2024-05-15 07:04:02.437526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.246 [2024-05-15 07:04:02.437571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.246 [2024-05-15 07:04:02.437601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.246 [2024-05-15 07:04:02.451105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.246 [2024-05-15 07:04:02.451144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.246 [2024-05-15 07:04:02.451172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.246 [2024-05-15 07:04:02.470065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:48.246 [2024-05-15 07:04:02.470101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1
lba:15541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.246 [2024-05-15 07:04:02.470119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.483091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.483132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.483161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.495948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.496000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.496040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.508869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.508905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.508925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.529139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.529179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.529222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.543095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.543149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.543170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.564506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.564543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.564563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.578098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.578137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.578165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.591359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.591403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.591434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.604206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.604261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.604291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.617485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.617521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.617540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.629336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.629394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.629415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.643094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.643134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.643162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.656713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.656757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.656787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.670014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 
00:26:48.504 [2024-05-15 07:04:02.670053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.670080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.682320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.682363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.682396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.694824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.694866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.694894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.708441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.708496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.708525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.720470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.720511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.720540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.504 [2024-05-15 07:04:02.734476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.504 [2024-05-15 07:04:02.734517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.504 [2024-05-15 07:04:02.734547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.746646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.746687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.746715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.758409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.758450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.758479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.771625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.771664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.771692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.783812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.783854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.783882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.798634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.798681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.798710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.810601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.810641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.810670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.825396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.825439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.825467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.837706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.837749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.837779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.850443] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.850485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.850526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.863138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.863176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.762 [2024-05-15 07:04:02.863203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.762 [2024-05-15 07:04:02.874833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.762 [2024-05-15 07:04:02.874873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.874901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.887407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.887447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.887475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.898915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.898978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.899005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.910155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.910184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.910215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.925765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.925805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.925832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.937223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.937271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.937288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.948895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.948943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.948971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.961806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.961846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.961864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.974333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.974373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.974402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.763 [2024-05-15 07:04:02.988059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:48.763 [2024-05-15 07:04:02.988099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.763 [2024-05-15 07:04:02.988126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-05-15 07:04:03.000288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.021 [2024-05-15 07:04:03.000328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-05-15 07:04:03.000356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-05-15 07:04:03.012011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.021 [2024-05-15 07:04:03.012049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-05-15 07:04:03.012077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-05-15 07:04:03.023021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.021 [2024-05-15 07:04:03.023063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-05-15 07:04:03.023091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-05-15 07:04:03.034628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.021 [2024-05-15 07:04:03.034668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-05-15 07:04:03.034695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-05-15 07:04:03.046136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.021 [2024-05-15 07:04:03.046175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-05-15 07:04:03.046203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-05-15 07:04:03.058356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.021 [2024-05-15 07:04:03.058389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.021 [2024-05-15 07:04:03.058407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.021 [2024-05-15 07:04:03.069840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.021 [2024-05-15 07:04:03.069879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.069906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.083432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.083471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.083498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.094860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.094899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.094950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.111049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.111090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.111117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.123265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.123305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.123336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.134865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.134905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.134943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.146361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.146400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.146427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.157593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.157632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.157659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.168928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.168990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:49.022 [2024-05-15 07:04:03.169027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:49.022 [2024-05-15 07:04:03.182289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00) 00:26:49.022 [2024-05-15 07:04:03.182329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:49.022 [2024-05-15 07:04:03.182356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:49.022 [2024-05-15 07:04:03.193072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:49.022 [2024-05-15 07:04:03.193113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:49.022 [2024-05-15 07:04:03.193141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:49.022 [2024-05-15 07:04:03.205519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:49.022 [2024-05-15 07:04:03.205557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:49.022 [2024-05-15 07:04:03.205584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:49.022 [2024-05-15 07:04:03.216538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:49.022 [2024-05-15 07:04:03.216577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:49.022 [2024-05-15 07:04:03.216604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:49.022 [2024-05-15 07:04:03.228358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1f00)
00:26:49.022 [2024-05-15 07:04:03.228409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:49.022 [2024-05-15 07:04:03.228434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:49.022
00:26:49.022                                                              Latency(us)
00:26:49.022 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:26:49.022 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:49.022 nvme0n1                     :       2.00   19074.97      74.51       0.00       0.00     6700.77    3398.16   21845.33
00:26:49.022 ===================================================================================================================
00:26:49.022 Total                       :              19074.97      74.51       0.00       0.00     6700.77    3398.16   21845.33
00:26:49.022 0
00:26:49.022 07:04:03 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:49.022 07:04:03 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:49.022 07:04:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:49.022 07:04:03 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:49.022 | .driver_specific
00:26:49.022 | .nvme_error
00:26:49.022 | .status_code
00:26:49.022 | .command_transient_transport_error'
00:26:49.280 07:04:03 -- host/digest.sh@71 -- # (( 149 > 0 ))
00:26:49.280 07:04:03 -- host/digest.sh@73 -- # killprocess 619694
00:26:49.280 07:04:03 -- common/autotest_common.sh@926 -- # '[' -z 619694 ']'
00:26:49.280 07:04:03 -- common/autotest_common.sh@930 -- # kill -0 619694
00:26:49.280 07:04:03 -- common/autotest_common.sh@931 -- # uname
00:26:49.280 07:04:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:49.280 07:04:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 619694
00:26:49.539 07:04:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:49.539 07:04:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:49.539 07:04:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 619694'
killing process with pid 619694
07:04:03 -- common/autotest_common.sh@945 -- # kill 619694
Received shutdown signal, test time was about 2.000000 seconds
00:26:49.539
00:26:49.539                                                              Latency(us)
00:26:49.539 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:26:49.539 ===================================================================================================================
00:26:49.539 Total                       :                  0.00       0.00       0.00       0.00        0.00       0.00       0.00
00:26:49.539 07:04:03 -- common/autotest_common.sh@950 -- # wait 619694
00:26:49.797 07:04:03 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
07:04:03 -- host/digest.sh@54 -- # local rw bs qd
07:04:03 -- host/digest.sh@56 -- # rw=randread
07:04:03 -- host/digest.sh@56 -- # bs=131072
07:04:03 -- host/digest.sh@56 -- # qd=16
07:04:03 -- host/digest.sh@58 -- # bperfpid=620251
07:04:03 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
07:04:03 -- host/digest.sh@60 -- # waitforlisten 620251 /var/tmp/bperf.sock
07:04:03 -- common/autotest_common.sh@819 -- # '[' -z 620251 ']'
07:04:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
07:04:03 -- common/autotest_common.sh@824 -- # local max_retries=100
07:04:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:04:03 -- common/autotest_common.sh@828 -- # xtrace_disable
07:04:03 -- common/autotest_common.sh@10 -- # set +x
00:26:49.798 [2024-05-15 07:04:03.839685] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:49.798 [2024-05-15 07:04:03.839773] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620251 ]
00:26:49.798 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:49.798 Zero copy mechanism will not be used.
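The get_transient_errcount check traced just above is the pass/fail gate for the first run: it asks bdevperf for per-bdev iostat, pulls the NVMe transient-transport-error counter out of the returned JSON, and asserts it is non-zero (149 here, one per injected digest failure). A minimal standalone sketch of that check, with the rpc.py invocation and jq filter copied verbatim from the trace and only the shell variable names invented here:

    #!/usr/bin/env bash
    # Sketch, assuming a bdevperf instance serving RPCs on /var/tmp/bperf.sock
    # that exposes the attached namespace as bdev "nvme0n1".
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # bdev_get_iostat returns JSON; because the controller was created after
    # bdev_nvme_set_options --nvme-error-stat, the bdev entry carries NVMe
    # error counters keyed by status code under .driver_specific.nvme_error.
    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error')

    # Fail unless the injected crc32c corruption actually surfaced as
    # COMMAND TRANSIENT TRANSPORT ERROR completions.
    (( errcount > 0 ))

The first bdevperf is then torn down with killprocess/wait, and the same flow restarts with a 131072-byte, queue-depth-16 random-read workload; bdevperf's -z flag keeps the new instance idle until the perform_tests RPC arrives, which leaves room to attach the controller and re-arm the error injection first.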
00:26:49.798 EAL: No free 2048 kB hugepages reported on node 1
00:26:49.798 [2024-05-15 07:04:03.908508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:49.798 [2024-05-15 07:04:04.009922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:50.732 07:04:04 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:50.732 07:04:04 -- common/autotest_common.sh@852 -- # return 0
00:26:50.732 07:04:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:50.732 07:04:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:50.991 07:04:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:50.991 07:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:50.991 07:04:05 -- common/autotest_common.sh@10 -- # set +x
00:26:50.991 07:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:50.991 07:04:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:50.991 07:04:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:51.248 nvme0n1
00:26:51.248 07:04:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:51.248 07:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:51.248 07:04:05 -- common/autotest_common.sh@10 -- # set +x
00:26:51.248 07:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:51.248 07:04:05 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:51.248 07:04:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
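Laid out end to end, the setup traced above is a handful of RPCs against the freshly started bdevperf before any I/O moves: enable NVMe error accounting, make sure no accel fault is armed while the controller attaches, attach over TCP with data digest (--ddgst) enabled, arm the crc32c corruption, and only then trigger the workload. A hedged sketch of the same sequence (every command is copied from the trace; the rpc_cmd wrapper below is an assumption, since the harness wires that helper's target socket up elsewhere):

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
    SOCK=/var/tmp/bperf.sock

    # Assumption: point rpc_cmd at the bdevperf socket, since the digest
    # verification (and thus the injected crc32c fault) runs inside bdevperf.
    rpc_cmd() { "$RPC" -s "$SOCK" "$@"; }

    # Count NVMe errors per status code; retry failed I/O indefinitely (-1)
    # so corrupted reads are retried instead of failing the job outright.
    "$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach cleanly: no crc32c faults while the controller connects.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the fault with the same arguments as the trace (-t corrupt -i 32):
    # corrupted crc32c results make the receive path's data digest check fail,
    # which is exactly the nvme_tcp.c:1391 error repeated in the log below.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # bdevperf was launched with -z, so the timed run starts only now.
    "$BPERF_PY" -s "$SOCK" perform_tests

With the 131072-byte I/O size each READ spans 32 blocks (len:32 in the records that follow), so a single corrupted digest invalidates a much larger transfer than in the 4096-byte run above.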
00:26:51.506 [2024-05-15 07:04:05.496977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.497041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.497061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.512867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.512904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.529074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.529104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.529135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.545384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.545419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.545438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.561531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.561567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.561586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.577520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.577554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.577575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.593278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.593324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.593344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.609141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.609171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.609209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.624860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.624893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.624912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.640769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.640803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.640822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.656516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.656545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.656578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.672302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.672336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.672355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.688214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.688257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.688277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.704233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.704279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.704298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.720002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.720030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.720062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.506 [2024-05-15 07:04:05.735589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.506 [2024-05-15 07:04:05.735623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.506 [2024-05-15 07:04:05.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.764 [2024-05-15 07:04:05.751498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.764 [2024-05-15 07:04:05.751537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.764 [2024-05-15 07:04:05.751557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:51.764 [2024-05-15 07:04:05.767224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.764 [2024-05-15 07:04:05.767252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.764 [2024-05-15 07:04:05.767285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:51.764 [2024-05-15 07:04:05.783175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.764 [2024-05-15 07:04:05.783204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.764 [2024-05-15 07:04:05.783220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:51.764 [2024-05-15 07:04:05.798992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.764 [2024-05-15 07:04:05.799021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.764 [2024-05-15 07:04:05.799053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:51.764 [2024-05-15 07:04:05.814736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880) 00:26:51.764 [2024-05-15 07:04:05.814769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.764 [2024-05-15 07:04:05.814789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:51.764 [2024-05-15 07:04:05.830427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880)
00:26:51.764 [2024-05-15 07:04:05.830460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:51.764 [2024-05-15 07:04:05.830479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats on tqpair=(0x1650880) from [2024-05-15 07:04:05.846263] through [2024-05-15 07:04:07.450755], roughly one triplet every 16 ms with only the lba and sqhd values changing; the run as a whole produced the 126 transient transport error completions counted below ...]
00:26:53.314 [2024-05-15 07:04:07.466385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880)
00:26:53.314 [2024-05-15 07:04:07.466417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:53.314 [2024-05-15 07:04:07.466435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:53.314 [2024-05-15 07:04:07.482025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1650880)
00:26:53.314 [2024-05-15 07:04:07.482053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:53.314 [2024-05-15 07:04:07.482084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:53.314
00:26:53.314 Latency(us)
00:26:53.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.314 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:53.314 nvme0n1 : 2.00 1949.97 243.75 0.00 0.00 8196.54 7184.69 17670.45
00:26:53.314 ===================================================================================================================
00:26:53.314 Total : 1949.97 243.75 0.00 0.00 8196.54 7184.69 17670.45
00:26:53.314 0
00:26:53.314 07:04:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
07:04:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
07:04:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
07:04:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:53.314 | .driver_specific
00:26:53.314 | .nvme_error
00:26:53.314 | .status_code
00:26:53.314 | .command_transient_transport_error'
07:04:07 -- host/digest.sh@71 -- # (( 126 > 0 ))
07:04:07 -- host/digest.sh@73 -- # killprocess 620251
07:04:07 -- common/autotest_common.sh@926 -- # '[' -z 620251 ']'
07:04:07 -- common/autotest_common.sh@930 -- # kill -0 620251
07:04:07 -- common/autotest_common.sh@931 -- # uname
07:04:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
07:04:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 620251
07:04:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1
07:04:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
07:04:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 620251'
killing process with pid 620251
07:04:07 -- common/autotest_common.sh@945 -- # kill 620251
Received shutdown signal, test time was about 2.000000 seconds
00:26:53.572
00:26:53.572 Latency(us)
00:26:53.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:53.572 ===================================================================================================================
00:26:53.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
07:04:07 -- common/autotest_common.sh@950 -- # wait 620251
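The summary above is internally consistent: at the job's 131072-byte (1/8 MiB) IO size, 1949.97 IOPS works out to 1949.97 / 8 = 243.75 MiB/s, matching the MiB/s column, and Fail/s stays at 0.00, consistent with --bdev-retry-count -1 letting the bdev layer retry every read that completed with a transient transport error. The get_transient_errcount step traced above reduces to the following sketch; every command is taken verbatim from this trace, and only the errcount variable name is added here:

    # count completions that carried TRANSIENT TRANSPORT ERROR (00/22), read out
    # of the bdevperf app's iostat over its RPC socket
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # 126 > 0 in this run, so the randread iteration passes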
00:26:53.830 07:04:08 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
07:04:08 -- host/digest.sh@54 -- # local rw bs qd
07:04:08 -- host/digest.sh@56 -- # rw=randwrite
07:04:08 -- host/digest.sh@56 -- # bs=4096
07:04:08 -- host/digest.sh@56 -- # qd=128
07:04:08 -- host/digest.sh@58 -- # bperfpid=620801
07:04:08 -- host/digest.sh@60 -- # waitforlisten 620801 /var/tmp/bperf.sock
07:04:08 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
07:04:08 -- common/autotest_common.sh@819 -- # '[' -z 620801 ']'
07:04:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
07:04:08 -- common/autotest_common.sh@824 -- # local max_retries=100
07:04:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
07:04:08 -- common/autotest_common.sh@828 -- # xtrace_disable
07:04:08 -- common/autotest_common.sh@10 -- # set +x
00:26:53.830 [2024-05-15 07:04:08.050804] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:53.830 [2024-05-15 07:04:08.050887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620801 ]
00:26:54.089 EAL: No free 2048 kB hugepages reported on node 1
00:26:54.089 [2024-05-15 07:04:08.122417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:54.089 [2024-05-15 07:04:08.227675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:55.021 07:04:09 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:55.021 07:04:09 -- common/autotest_common.sh@852 -- # return 0
00:26:55.021 07:04:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:55.021 07:04:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:55.320 07:04:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:55.320 07:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:55.320 07:04:09 -- common/autotest_common.sh@10 -- # set +x
00:26:55.320 07:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:55.320 07:04:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:55.320 07:04:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:55.597 nvme0n1
00:26:55.597 07:04:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:55.597 07:04:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:55.597 07:04:09 -- common/autotest_common.sh@10 -- # set +x
00:26:55.856 07:04:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
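Condensed, the setup just traced is this sequence (a sketch of what digest.sh does, not the script itself: the SPDK shorthand and the backgrounding '&' are added here, and rpc_cmd is left unexpanded because xtrace is disabled inside that wrapper, so the socket it targets is not visible in this log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle (-z) so it can be driven over its own RPC socket
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    # record NVMe error completions and retry failed I/O indefinitely
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable    # no corruption while attaching
    # attach the target with data digest (--ddgst) enabled on the TCP transport
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results so data digests mismatch
    # kick off the 2-second randwrite run shown below
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests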
00:26:55.857 07:04:09 -- host/digest.sh@69 -- # bperf_py perform_tests
07:04:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:55.857 Running I/O for 2 seconds...
00:26:55.857 [2024-05-15 07:04:09.957759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f6cc8
00:26:55.857 [2024-05-15 07:04:09.958736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.857 [2024-05-15 07:04:09.958779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:55.857 [2024-05-15 07:04:09.969763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f31b8
00:26:55.857 [2024-05-15 07:04:09.970224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.857 [2024-05-15 07:04:09.970255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:55.857 [2024-05-15 07:04:09.981880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0
00:26:55.857 [2024-05-15 07:04:09.983013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:55.857 [2024-05-15 07:04:09.983044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
cid:81 nsid:1 lba:15496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.857 [2024-05-15 07:04:10.030665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:55.857 [2024-05-15 07:04:10.041261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:55.857 [2024-05-15 07:04:10.042501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.857 [2024-05-15 07:04:10.042533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:55.857 [2024-05-15 07:04:10.053236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:55.857 [2024-05-15 07:04:10.054519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.857 [2024-05-15 07:04:10.054563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:55.857 [2024-05-15 07:04:10.065405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:55.857 [2024-05-15 07:04:10.066648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.857 [2024-05-15 07:04:10.066678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:55.857 [2024-05-15 07:04:10.077673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:55.857 [2024-05-15 07:04:10.078903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.857 [2024-05-15 07:04:10.078959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:55.857 [2024-05-15 07:04:10.089841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:55.857 [2024-05-15 07:04:10.091113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.857 [2024-05-15 07:04:10.091143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.101906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.103205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.103233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.113995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.115336] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.115379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.125980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.127309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.127337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.137846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.139165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.139202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.149751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.151098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.151129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.161757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.163278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.163307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.174515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.176164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.176212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.187318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0bc0 00:26:56.116 [2024-05-15 07:04:10.188838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.188872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.200131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4578 00:26:56.116 [2024-05-15 07:04:10.201694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.201726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.212777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eaab8 00:26:56.116 [2024-05-15 07:04:10.214320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.214352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.225452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e3d08 00:26:56.116 [2024-05-15 07:04:10.227196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.227223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.238070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e5658 00:26:56.116 [2024-05-15 07:04:10.239831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.239858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.250676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f2510 00:26:56.116 [2024-05-15 07:04:10.252367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.252404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.263140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eea00 00:26:56.116 [2024-05-15 07:04:10.264927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.264967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.275563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e6738 00:26:56.116 [2024-05-15 07:04:10.277071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.277104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.288158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ee190 00:26:56.116 [2024-05-15 07:04:10.290395] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.290426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.299531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e5ec8 00:26:56.116 [2024-05-15 07:04:10.300758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.300789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.312017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ef6a8 00:26:56.116 [2024-05-15 07:04:10.313266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.313298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.325097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f1ca0 00:26:56.116 [2024-05-15 07:04:10.326359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.326391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.337692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e73e0 00:26:56.116 [2024-05-15 07:04:10.339045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.116 [2024-05-15 07:04:10.339072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:56.116 [2024-05-15 07:04:10.350316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eaab8 00:26:56.375 [2024-05-15 07:04:10.351651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.351683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.362763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eaab8 00:26:56.375 [2024-05-15 07:04:10.364135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.364163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.375324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eaab8 00:26:56.375 [2024-05-15 
07:04:10.376722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.376767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.388613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f1ca0 00:26:56.375 [2024-05-15 07:04:10.390146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.390178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.401424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.375 [2024-05-15 07:04:10.402551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.402588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.414118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.375 [2024-05-15 07:04:10.415523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.415556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.426711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.375 [2024-05-15 07:04:10.428152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.428181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.439408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.375 [2024-05-15 07:04:10.440797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.440829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.451901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.375 [2024-05-15 07:04:10.453372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.453404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.464584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.375 
[2024-05-15 07:04:10.466066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.466096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.477381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.375 [2024-05-15 07:04:10.478876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.375 [2024-05-15 07:04:10.478908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:56.375 [2024-05-15 07:04:10.490120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.376 [2024-05-15 07:04:10.491602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.491634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.502733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.376 [2024-05-15 07:04:10.504230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.504257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.515312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.376 [2024-05-15 07:04:10.516797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.516829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.527824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.376 [2024-05-15 07:04:10.529338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.529370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.540432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.376 [2024-05-15 07:04:10.541955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.542000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.552985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with 
pdu=0x2000190f2d80 00:26:56.376 [2024-05-15 07:04:10.554512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.554544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.565594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f2d80 00:26:56.376 [2024-05-15 07:04:10.567138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.567166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.578162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.376 [2024-05-15 07:04:10.579752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.579789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.589155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ec408 00:26:56.376 [2024-05-15 07:04:10.590268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.590310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:56.376 [2024-05-15 07:04:10.601774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e38d0 00:26:56.376 [2024-05-15 07:04:10.602837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.376 [2024-05-15 07:04:10.602868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.614422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.615549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.615581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.627047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.628164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.628191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.639663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.640762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.640794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.652218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.653370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.653402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.664851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.665991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.666019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.677518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.678682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.678714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.690114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.691322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.691355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.702737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.703902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.703941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.715330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.716531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.716564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.727988] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.729173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.729202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.740564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.741776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.741808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.753145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.754502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.754534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.765855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.767106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.767132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.778284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.779532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.779565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.790807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.792054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.792081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.803296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.804549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.804581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.817179] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.818638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.818669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.829686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e4de8 00:26:56.635 [2024-05-15 07:04:10.831140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.831169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.840846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e6300 00:26:56.635 [2024-05-15 07:04:10.842119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.842146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.853260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e6300 00:26:56.635 [2024-05-15 07:04:10.854556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.854588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:56.635 [2024-05-15 07:04:10.865772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e6300 00:26:56.635 [2024-05-15 07:04:10.867083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.635 [2024-05-15 07:04:10.867113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.879646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e6300 00:26:56.894 [2024-05-15 07:04:10.881220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.881252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.890740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f4298 00:26:56.894 [2024-05-15 07:04:10.892106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.892134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 
07:04:10.903292] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f4298 00:26:56.894 [2024-05-15 07:04:10.904647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.904685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.917163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e38d0 00:26:56.894 [2024-05-15 07:04:10.918405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.918438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.929579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eea00 00:26:56.894 [2024-05-15 07:04:10.930773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.930804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.942012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f81e0 00:26:56.894 [2024-05-15 07:04:10.943355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.943387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.954508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e38d0 00:26:56.894 [2024-05-15 07:04:10.955801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.955833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.967011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f7da8 00:26:56.894 [2024-05-15 07:04:10.968242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.894 [2024-05-15 07:04:10.968274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:56.894 [2024-05-15 07:04:10.979563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f2948 00:26:56.894 [2024-05-15 07:04:10.980736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:10.980773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
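Every failure in this trace is the same three-record cycle: the TCP transport reports a data digest (CRC-32C) mismatch on a PDU, nvme_qpair.c prints the WRITE that was in flight, and the command completes with status (00/22), which SPDK's own print names COMMAND TRANSIENT TRANSPORT ERROR, a retryable status that bdevperf keeps retrying per the --bdev-retry-count -1 set during setup above. A quick way to tally the injected failures from a saved copy of this console output (console.log is a placeholder file name) is to check that the two counts track each other and see which PDUs were hit:

  grep -c 'Data digest error on tqpair' console.log
  grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' console.log
  # per-PDU breakdown of where the corrupted digests landed
  grep -o 'pdu=0x[0-9a-f]*' console.log | sort | uniq -c | sort -rn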
00:26:56.895 [2024-05-15 07:04:10.992006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ed4e8 00:26:56.895 [2024-05-15 07:04:10.993162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:10.993190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.003690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f9f68 00:26:56.895 [2024-05-15 07:04:11.005659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.005690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.016044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e5a90 00:26:56.895 [2024-05-15 07:04:11.018142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.018176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.028617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ed0b0 00:26:56.895 [2024-05-15 07:04:11.030836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.030868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.040255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f46d0 00:26:56.895 [2024-05-15 07:04:11.040508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.040538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.054217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eaab8 00:26:56.895 [2024-05-15 07:04:11.055473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.055513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.066824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eff18 00:26:56.895 [2024-05-15 07:04:11.068393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.068425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.079298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e8088 00:26:56.895 [2024-05-15 07:04:11.080868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.080901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.091703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eff18 00:26:56.895 [2024-05-15 07:04:11.093292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.093328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.104362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f6cc8 00:26:56.895 [2024-05-15 07:04:11.105182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.105209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.115514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ed0b0 00:26:56.895 [2024-05-15 07:04:11.116834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:56.895 [2024-05-15 07:04:11.116866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:56.895 [2024-05-15 07:04:11.127866] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f20d8 00:26:57.154 [2024-05-15 07:04:11.129275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.129307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.140388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e5a90 00:26:57.154 [2024-05-15 07:04:11.141635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.141667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.152840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f2948 00:26:57.154 [2024-05-15 07:04:11.154320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.154353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.167052] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e49b0 00:26:57.154 [2024-05-15 07:04:11.168367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.168396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.179196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190fb480 00:26:57.154 [2024-05-15 07:04:11.180571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.180603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.191657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f1ca0 00:26:57.154 [2024-05-15 07:04:11.192998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.193027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.203911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e5220 00:26:57.154 [2024-05-15 07:04:11.205226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.205269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.216395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e9168 00:26:57.154 [2024-05-15 07:04:11.217655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.217688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.228871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190efae0 00:26:57.154 [2024-05-15 07:04:11.230083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.230112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.241259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f0788 00:26:57.154 [2024-05-15 07:04:11.242588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.242625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.253705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f2948 00:26:57.154 [2024-05-15 07:04:11.254670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.254702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.266071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190f31b8 00:26:57.154 [2024-05-15 07:04:11.267029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.267073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.278484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e9168 00:26:57.154 [2024-05-15 07:04:11.279321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.279359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.290888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190eee38 00:26:57.154 [2024-05-15 07:04:11.291949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.291981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.303121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190fc128 00:26:57.154 [2024-05-15 07:04:11.304103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.304132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:57.154 [2024-05-15 07:04:11.315479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190e6b70 00:26:57.154 [2024-05-15 07:04:11.316384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.154 [2024-05-15 07:04:11.316419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:57.155 [2024-05-15 07:04:11.327830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190fbcf0 00:26:57.155 [2024-05-15 07:04:11.328809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:57.155 [2024-05-15 07:04:11.328846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
[log condensed: several dozen further injected-error entries (stream time 00:26:57.155 through 00:26:57.931, wall clock 07:04:11.340 through 07:04:11.953) are omitted; the run's full error total is asserted below as 161. Every entry repeats the three-line pattern shown here against tqpair=(0xd03240), with only the pdu offset, cid, lba, and sqhd fields changing:]
00:26:57.155 [2024-05-15 07:04:11.340077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd03240) with pdu=0x2000190ef6a8
00:26:57.155 [2024-05-15 07:04:11.341080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:57.155 [2024-05-15 07:04:11.341114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:57.931
00:26:57.931 Latency(us)
00:26:57.931 Device Information : runtime(s)     IOPS   MiB/s  Fail/s  TO/s  Average      min       max
00:26:57.931 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:57.931 nvme0n1            :       2.00  20495.95   80.06    0.00  0.00  6237.38  2827.76  13010.11
00:26:57.931 ===================================================================================================================
00:26:57.931 Total              :             20495.95   80.06    0.00  0.00  6237.38  2827.76  13010.11
00:26:57.931 0
00:26:57.931 07:04:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:57.931 07:04:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:57.931 07:04:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:26:57.931 07:04:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:58.188 07:04:12 -- host/digest.sh@71 -- # (( 161 > 0 ))
00:26:58.188 07:04:12 -- host/digest.sh@73 -- # killprocess 620801
00:26:58.188 07:04:12 -- common/autotest_common.sh@926 -- # '[' -z 620801 ']'
00:26:58.188 07:04:12 -- common/autotest_common.sh@930 -- # kill -0 620801
00:26:58.188 07:04:12 -- common/autotest_common.sh@931 -- # uname
00:26:58.188 07:04:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:58.188 07:04:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 620801
00:26:58.188 07:04:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:58.188 07:04:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:58.188 07:04:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 620801'
00:26:58.188 killing process with pid 620801
00:26:58.188 07:04:12 -- common/autotest_common.sh@945 -- # kill 620801
00:26:58.188 Received shutdown signal, test time was about 2.000000 seconds
00:26:58.188
00:26:58.188 Latency(us)
00:26:58.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.188 ===================================================================================================================
00:26:58.188 Total              :             0.00    0.00    0.00  0.00     0.00     0.00      0.00
00:26:58.188 07:04:12 -- common/autotest_common.sh@950 -- # wait 620801
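For reference, the (( 161 > 0 )) check above comes out of host/digest.sh's get_transient_errcount helper. A minimal sketch of that query, assuming bdevperf is still listening on /var/tmp/bperf.sock and reusing the rpc.py path and jq filter exactly as they appear in the trace:

    # Hedged sketch: read the per-bdev NVMe error counters over bdevperf's RPC socket.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)   # printed 161 for this run
    (( errcount > 0 ))                           # the test passes only if errors were seen

These counters are populated because bdev_nvme_set_options is invoked with --nvme-error-stat before the controller is attached, as in the next iteration's setup below.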
00:26:58.446 07:04:12 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:26:58.446 07:04:12 -- host/digest.sh@54 -- # local rw bs qd
00:26:58.446 07:04:12 -- host/digest.sh@56 -- # rw=randwrite
00:26:58.446 07:04:12 -- host/digest.sh@56 -- # bs=131072
00:26:58.446 07:04:12 -- host/digest.sh@56 -- # qd=16
00:26:58.446 07:04:12 -- host/digest.sh@58 -- # bperfpid=621351
00:26:58.446 07:04:12 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:58.446 07:04:12 -- host/digest.sh@60 -- # waitforlisten 621351 /var/tmp/bperf.sock
00:26:58.446 07:04:12 -- common/autotest_common.sh@819 -- # '[' -z 621351 ']'
00:26:58.446 07:04:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:58.446 07:04:12 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:58.446 07:04:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:58.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:58.446 07:04:12 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:58.446 07:04:12 -- common/autotest_common.sh@10 -- # set +x
00:26:58.447 [2024-05-15 07:04:12.544846] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:26:58.447 [2024-05-15 07:04:12.544924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621351 ]
00:26:58.447 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:58.447 Zero copy mechanism will not be used.
00:26:58.447 EAL: No free 2048 kB hugepages reported on node 1
00:26:58.447 [2024-05-15 07:04:12.617394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:58.705 [2024-05-15 07:04:12.729704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:59.267 07:04:13 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:59.267 07:04:13 -- common/autotest_common.sh@852 -- # return 0
00:26:59.267 07:04:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:59.267 07:04:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:59.524 07:04:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:59.524 07:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:59.524 07:04:13 -- common/autotest_common.sh@10 -- # set +x
00:26:59.524 07:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:59.524 07:04:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:59.524 07:04:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:00.090 nvme0n1
00:27:00.090 07:04:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:27:00.090 07:04:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:00.090 07:04:14 -- common/autotest_common.sh@10 -- # set +x
00:27:00.090 07:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:00.090 07:04:14 -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:00.090 07:04:14 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
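The wrapped trace above is the complete setup for this second error-injection iteration. Reconstructed as a plain script it looks roughly like the following; paths, addresses, flags, and RPC names are copied from the trace, while sending accel_error_inject_error to the nvmf target's default RPC socket is an assumption (the trace issues it through rpc_cmd without a -s argument):

    # Sketch of the randwrite / 128 KiB / qd=16 iteration driven by host/digest.sh.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf on core mask 0x2, waiting for RPC configuration (-z).
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done  # crude stand-in for waitforlisten

    # Keep NVMe error statistics and retry failed I/O indefinitely.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # Injection off while connecting; per the trace this RPC goes to the
    # nvmf target's default socket, not to bdevperf (assumption, see above).
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Attach the TCP controller with data digest enabled (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm crc32c corruption (-t corrupt -i 32, as in the trace) so digest
    # verification fails and WRITEs complete with the transient transport
    # error status seen in the run below.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive the 2-second workload.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

Setting --bdev-retry-count -1 is presumably what lets the run finish despite the injected failures: the bdev layer keeps retrying the WRITEs while the error counters record each failed attempt.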
00:27:00.090 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:00.090 Zero copy mechanism will not be used.
00:27:00.090 Running I/O for 2 seconds...
[log condensed: the 2-second run's injected-error entries (wall clock 07:04:14.303 through 07:04:16.286, roughly one every 26 ms) are omitted; the error total asserted below is 76. Every entry reports a data digest error on tqpair=(0xd033e0) with pdu=0x2000190fef90, followed by the WRITE command (sqid:1 cid:15 nsid:1, len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and its completion, with only the lba and sqhd fields changing:]
00:27:00.090 [2024-05-15 07:04:14.303505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd033e0) with pdu=0x2000190fef90
00:27:00.090 [2024-05-15 07:04:14.303906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:00.090 [2024-05-15 07:04:14.303957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:02.155
00:27:02.155                               Latency(us)
00:27:02.155 Device Information : runtime(s)  IOPS   MiB/s  Fail/s  TO/s  Average  min  max
00:27:02.155 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:02.155 	 nvme0n1 : 2.01 1175.51 146.94 0.00 0.00 13562.07 4053.52 30292.20
00:27:02.155 ===================================================================================================================
00:27:02.155 Total : 1175.51 146.94 0.00 0.00 13562.07 4053.52 30292.20
00:27:02.155 0
00:27:02.155 07:04:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:02.155 07:04:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:02.155 07:04:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:02.155 07:04:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:02.155 | .driver_specific
00:27:02.155 | .nvme_error
00:27:02.155 | .status_code
00:27:02.155 | .command_transient_transport_error'
00:27:02.413 07:04:16 -- host/digest.sh@71 -- # (( 76 > 0 ))
00:27:02.413 07:04:16 -- host/digest.sh@73 -- # killprocess 621351
00:27:02.413 07:04:16 -- common/autotest_common.sh@926 -- # '[' -z 621351 ']'
00:27:02.413 07:04:16 -- common/autotest_common.sh@930 -- # kill -0 621351
00:27:02.413 07:04:16 -- common/autotest_common.sh@931 -- # uname
00:27:02.413 07:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:02.413 07:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 621351
00:27:02.413 07:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:27:02.413 07:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:27:02.413 07:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 621351'
00:27:02.413 killing process with pid 621351
00:27:02.413 07:04:16 -- common/autotest_common.sh@945 -- # kill 621351
00:27:02.413 Received shutdown signal, test time was about 2.000000 seconds
00:27:02.413
00:27:02.413                               Latency(us)
00:27:02.413 Device Information : runtime(s)  IOPS   MiB/s  Fail/s  TO/s  Average  min  max
00:27:02.413 ===================================================================================================================
00:27:02.413 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:02.413 07:04:16 -- common/autotest_common.sh@950 -- # wait 621351
00:27:02.670 07:04:16 -- host/digest.sh@115 -- # killprocess 619669
00:27:02.670 07:04:16 -- common/autotest_common.sh@926 -- # '[' -z 619669 ']'
00:27:02.670 07:04:16 -- common/autotest_common.sh@930 -- # kill -0 619669
00:27:02.670 07:04:16 -- common/autotest_common.sh@931 -- # uname
00:27:02.670 07:04:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:02.670 07:04:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 619669
00:27:02.670 07:04:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:02.670 07:04:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:02.670 07:04:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 619669'
00:27:02.670 killing process with pid 619669
00:27:02.670 07:04:16 -- common/autotest_common.sh@945 -- # kill 619669
00:27:02.670 07:04:16 -- common/autotest_common.sh@950 -- # wait 619669
00:27:02.929
00:27:02.929 real	0m18.198s
00:27:02.929 user	0m37.278s
00:27:02.929 sys	0m3.952s
00:27:02.929 07:04:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:02.929 07:04:17 -- common/autotest_common.sh@10 -- # set +x
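[Editorial note] The 76 above is the whole pass/fail gate for this test: bdevperf's RPC socket exposes per-bdev NVMe error counters through bdev_get_iostat, and digest.sh only requires the transient-transport-error count to be non-zero. A standalone sketch of the same check, with the socket path, bdev name, and jq path exactly as traced above (the script name is illustrative):

# check_transient_errs.sh: minimal sketch of host/digest.sh's get_transient_errcount gate.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errs=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Digest corruption must surface as transient transport errors, not as silent data corruption.
(( errs > 0 )) || { echo "no transient transport errors recorded"; exit 1; }
echo "observed $errs transient transport errors"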
00:27:02.929 ************************************
00:27:02.929 END TEST nvmf_digest_error
00:27:02.929 ************************************
00:27:02.929 07:04:17 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:27:02.929 07:04:17 -- host/digest.sh@139 -- # nvmftestfini
00:27:02.929 07:04:17 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:02.929 07:04:17 -- nvmf/common.sh@116 -- # sync
00:27:02.929 07:04:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:02.929 07:04:17 -- nvmf/common.sh@119 -- # set +e
00:27:02.929 07:04:17 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:02.929 07:04:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:02.929 rmmod nvme_tcp
00:27:02.929 rmmod nvme_fabrics
00:27:03.188 rmmod nvme_keyring
00:27:03.188 07:04:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:27:03.188 07:04:17 -- nvmf/common.sh@123 -- # set -e
00:27:03.188 07:04:17 -- nvmf/common.sh@124 -- # return 0
00:27:03.188 07:04:17 -- nvmf/common.sh@477 -- # '[' -n 619669 ']'
00:27:03.188 07:04:17 -- nvmf/common.sh@478 -- # killprocess 619669
00:27:03.188 07:04:17 -- common/autotest_common.sh@926 -- # '[' -z 619669 ']'
00:27:03.188 07:04:17 -- common/autotest_common.sh@930 -- # kill -0 619669
00:27:03.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (619669) - No such process
00:27:03.188 07:04:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 619669 is not found'
00:27:03.188 Process with pid 619669 is not found
00:27:03.188 07:04:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:27:03.188 07:04:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:27:03.188 07:04:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:27:03.188 07:04:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:03.188 07:04:17 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:27:03.188 07:04:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:03.188 07:04:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:03.188 07:04:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:05.091 07:04:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:27:05.091
00:27:05.091 real	0m38.914s
00:27:05.091 user	1m10.318s
00:27:05.091 sys	0m9.638s
00:27:05.091 07:04:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:05.091 07:04:19 -- common/autotest_common.sh@10 -- # set +x
00:27:05.091 ************************************
00:27:05.091 END TEST nvmf_digest
00:27:05.091 ************************************
00:27:05.091 07:04:19 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]]
00:27:05.091 07:04:19 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]]
00:27:05.091 07:04:19 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]]
00:27:05.091 07:04:19 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:27:05.091 07:04:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:27:05.091 07:04:19 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:05.091 07:04:19 -- common/autotest_common.sh@10 -- # set +x
00:27:05.091 ************************************
00:27:05.091 START TEST nvmf_bdevperf
00:27:05.091 ************************************
00:27:05.091 07:04:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
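[Editorial note] Worth noting in the teardown above: killprocess probes with kill -0 first, refuses to kill a sudo wrapper, and treats an already-dead pid (the "No such process" branch) as success, which is why the second killprocess 619669 call is harmless after nvmftestfini already reaped the target. A condensed sketch of that pattern; the real helper in autotest_common.sh also handles FreeBSD, so treat this as illustrative:

# Sketch of the autotest_common.sh killprocess pattern seen above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"   # already gone: not an error
        return 0
    fi
    [ "$(uname)" = Linux ] || return 1              # simplification: Linux only
    local pname
    pname=$(ps --no-headers -o comm= "$pid")
    [ "$pname" = sudo ] && return 1                 # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                      # wait only works for our own children
}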
00:27:05.091 * Looking for test storage...
00:27:05.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:05.091 07:04:19 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:05.091 07:04:19 -- nvmf/common.sh@7 -- # uname -s
00:27:05.091 07:04:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:05.091 07:04:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:05.091 07:04:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:05.091 07:04:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:05.091 07:04:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:05.091 07:04:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:05.091 07:04:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:05.091 07:04:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:05.091 07:04:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:05.091 07:04:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:05.091 07:04:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:27:05.091 07:04:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:27:05.091 07:04:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:05.091 07:04:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:05.091 07:04:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:05.091 07:04:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:05.349 07:04:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:05.349 07:04:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:05.349 07:04:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:05.349 07:04:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain dirs repeated...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:05.349 07:04:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same value with /opt/go/1.21.1/bin prepended again...]
00:27:05.349 07:04:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same value with /opt/protoc/21.7/bin prepended again...]
00:27:05.349 07:04:19 -- paths/export.sh@5 -- # export PATH
00:27:05.349 07:04:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...the same PATH value as above...]
00:27:05.349 07:04:19 -- nvmf/common.sh@46 -- # : 0
00:27:05.349 07:04:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:27:05.349 07:04:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:27:05.349 07:04:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:27:05.349 07:04:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:05.349 07:04:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:05.349 07:04:19 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:27:05.349 07:04:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:27:05.349 07:04:19 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:27:05.349 07:04:19 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:05.349 07:04:19 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:05.349 07:04:19 -- host/bdevperf.sh@24 -- # nvmftestinit
00:27:05.349 07:04:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:27:05.349 07:04:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:05.349 07:04:19 -- nvmf/common.sh@436 -- # prepare_net_devs
00:27:05.349 07:04:19 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:27:05.349 07:04:19 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:27:05.349 07:04:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:05.349 07:04:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:05.349 07:04:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:05.349 07:04:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:27:05.349 07:04:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:27:05.349 07:04:19 -- nvmf/common.sh@284 -- # xtrace_disable
00:27:05.349 07:04:19 -- common/autotest_common.sh@10 -- # set +x
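[Editorial note] The host identity sourced above is just nvme-cli's UUID-based NQN, and NVME_HOSTID is its UUID suffix. If nvme(1) is available you can reproduce the pair directly; the uuidgen fallback below is an illustrative assumption, not something nvmf/common.sh does:

# Sketch: derive a host NQN/ID pair in the same format common.sh logs above.
if command -v nvme >/dev/null 2>&1; then
    hostnqn=$(nvme gen-hostnqn)                           # what nvmf/common.sh@17 calls
else
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"  # assumption: same NQN shape
fi
hostid=${hostnqn##*:uuid:}                                # HOSTID is the UUID suffix
echo "hostnqn=$hostnqn hostid=$hostid"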
00:27:07.874 07:04:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:27:07.874 07:04:21 -- nvmf/common.sh@290 -- # pci_devs=()
00:27:07.874 07:04:21 -- nvmf/common.sh@290 -- # local -a pci_devs
00:27:07.874 07:04:21 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:27:07.874 07:04:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:27:07.874 07:04:21 -- nvmf/common.sh@292 -- # pci_drivers=()
00:27:07.874 07:04:21 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:27:07.874 07:04:21 -- nvmf/common.sh@294 -- # net_devs=()
00:27:07.874 07:04:21 -- nvmf/common.sh@294 -- # local -ga net_devs
00:27:07.874 07:04:21 -- nvmf/common.sh@295 -- # e810=()
00:27:07.874 07:04:21 -- nvmf/common.sh@295 -- # local -ga e810
00:27:07.874 07:04:21 -- nvmf/common.sh@296 -- # x722=()
00:27:07.874 07:04:21 -- nvmf/common.sh@296 -- # local -ga x722
00:27:07.874 07:04:21 -- nvmf/common.sh@297 -- # mlx=()
00:27:07.874 07:04:21 -- nvmf/common.sh@297 -- # local -ga mlx
00:27:07.874 07:04:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:07.874 07:04:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:27:07.874 07:04:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:27:07.874 07:04:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:27:07.874 07:04:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:27:07.874 07:04:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:27:07.874 07:04:21 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:27:07.875 07:04:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:27:07.875 07:04:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:27:07.875 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:27:07.875 07:04:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:27:07.875 07:04:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:27:07.875 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:27:07.875 07:04:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:27:07.875 07:04:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:27:07.875 07:04:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:07.875 07:04:21 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:27:07.875 07:04:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:07.875 07:04:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:27:07.875 Found net devices under 0000:0a:00.0: cvl_0_0
00:27:07.875 07:04:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:27:07.875 07:04:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:27:07.875 07:04:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:07.875 07:04:21 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:27:07.875 07:04:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:07.875 07:04:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:27:07.875 Found net devices under 0000:0a:00.1: cvl_0_1
00:27:07.875 07:04:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:27:07.875 07:04:21 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:27:07.875 07:04:21 -- nvmf/common.sh@402 -- # is_hw=yes
00:27:07.875 07:04:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:27:07.875 07:04:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:07.875 07:04:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:07.875 07:04:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:07.875 07:04:21 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:27:07.875 07:04:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:07.875 07:04:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:07.875 07:04:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:27:07.875 07:04:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:07.875 07:04:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:07.875 07:04:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:27:07.875 07:04:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:27:07.875 07:04:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:27:07.875 07:04:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:07.875 07:04:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:07.875 07:04:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:07.875 07:04:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:27:07.875 07:04:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:07.875 07:04:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:07.875 07:04:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:07.875 07:04:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:27:07.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:07.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms
00:27:07.875
00:27:07.875 --- 10.0.0.2 ping statistics ---
00:27:07.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:07.875 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:27:07.875 07:04:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:07.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:07.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:27:07.875
00:27:07.875 --- 10.0.0.1 ping statistics ---
00:27:07.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:07.875 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:27:07.875 07:04:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:07.875 07:04:21 -- nvmf/common.sh@410 -- # return 0
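[Editorial note] The target/initiator split above is plain iproute2 plumbing: the first E810 port is moved into a fresh network namespace and addressed as the target (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and a ping in each direction proves connectivity. The same recipe condensed, with every command lifted from the trace (run as root; interface and namespace names as in this run):

# Sketch of the nvmf_tcp_init plumbing shown above.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                     # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1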
00:27:07.875 07:04:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:27:07.875 07:04:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:07.875 07:04:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:27:07.875 07:04:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:07.875 07:04:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:27:07.875 07:04:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:27:07.875 07:04:21 -- host/bdevperf.sh@25 -- # tgt_init
00:27:07.875 07:04:21 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:07.875 07:04:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:07.875 07:04:21 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:07.875 07:04:21 -- common/autotest_common.sh@10 -- # set +x
00:27:07.875 07:04:21 -- nvmf/common.sh@469 -- # nvmfpid=624154
00:27:07.875 07:04:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:07.875 07:04:21 -- nvmf/common.sh@470 -- # waitforlisten 624154
00:27:07.875 07:04:21 -- common/autotest_common.sh@819 -- # '[' -z 624154 ']'
00:27:07.875 07:04:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:07.875 07:04:21 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:07.875 07:04:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:07.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:07.875 07:04:21 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:07.875 07:04:21 -- common/autotest_common.sh@10 -- # set +x
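[Editorial note] waitforlisten above simply retries until the new pid answers on the RPC socket. A minimal stand-in sketch; only rpc_addr=/var/tmp/spdk.sock and max_retries=100 come from the trace, while using rpc_get_methods as the probe and the 0.5 s poll interval are assumptions about the real helper in autotest_common.sh:

# Sketch of the waitforlisten idea: poll the target's RPC socket until it answers.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1       # target died during startup
        "$RPC" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.5
    done
    return 1
}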
00:27:08.132 [2024-05-15 07:04:21.936167] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:08.132 [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:08.132 EAL: No free 2048 kB hugepages reported on node 1
00:27:08.132 [2024-05-15 07:04:22.012422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:08.132 [2024-05-15 07:04:22.123039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:08.132 [2024-05-15 07:04:22.123191] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:08.132 [2024-05-15 07:04:22.123209] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:08.132 [2024-05-15 07:04:22.123222] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:08.132 [2024-05-15 07:04:22.123274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:08.132 [2024-05-15 07:04:22.123320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:08.132 [2024-05-15 07:04:22.123322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:08.697 07:04:22 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:08.697 07:04:22 -- common/autotest_common.sh@852 -- # return 0
00:27:08.697 07:04:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:27:08.697 07:04:22 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:08.697 07:04:22 -- common/autotest_common.sh@10 -- # set +x
00:27:08.697 07:04:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:08.697 07:04:22 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:08.697 07:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:08.697 07:04:22 -- common/autotest_common.sh@10 -- # set +x
00:27:08.698 [2024-05-15 07:04:22.911763] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:08.698 07:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:08.698 07:04:22 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:08.698 07:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:08.698 07:04:22 -- common/autotest_common.sh@10 -- # set +x
00:27:08.956 Malloc0
00:27:08.956 07:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:08.956 07:04:22 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:08.956 07:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:08.956 07:04:22 -- common/autotest_common.sh@10 -- # set +x
00:27:08.956 07:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:08.956 07:04:22 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:08.956 07:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:08.956 07:04:22 -- common/autotest_common.sh@10 -- # set +x
00:27:08.956 07:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:08.956 07:04:22 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:08.956 07:04:22 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:08.956 07:04:22 -- common/autotest_common.sh@10 -- # set +x
00:27:08.956 [2024-05-15 07:04:22.980846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:08.956 07:04:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
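[Editorial note] Those four rpc_cmd calls are the entire target-side setup: create the TCP transport, back it with a 64 MiB malloc ram disk, wrap that in a subsystem, and start listening. Issued by hand the sequence looks like this sketch; the commands and arguments are verbatim from the trace, and the assumption is that rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

# Sketch: the NVMe-oF TCP target bring-up performed by bdevperf.sh@17-21.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0           # 64 MiB ram disk, 512 B blocks
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420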
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.956 "hdgst": ${hdgst:-false}, 00:27:08.956 "ddgst": ${ddgst:-false} 00:27:08.956 }, 00:27:08.956 "method": "bdev_nvme_attach_controller" 00:27:08.956 } 00:27:08.956 EOF 00:27:08.956 )") 00:27:08.956 07:04:22 -- nvmf/common.sh@542 -- # cat 00:27:08.956 07:04:22 -- nvmf/common.sh@544 -- # jq . 00:27:08.956 07:04:22 -- nvmf/common.sh@545 -- # IFS=, 00:27:08.956 07:04:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:08.956 "params": { 00:27:08.956 "name": "Nvme1", 00:27:08.956 "trtype": "tcp", 00:27:08.956 "traddr": "10.0.0.2", 00:27:08.956 "adrfam": "ipv4", 00:27:08.956 "trsvcid": "4420", 00:27:08.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:08.956 "hdgst": false, 00:27:08.956 "ddgst": false 00:27:08.956 }, 00:27:08.956 "method": "bdev_nvme_attach_controller" 00:27:08.956 }' 00:27:08.956 [2024-05-15 07:04:23.025124] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:08.956 [2024-05-15 07:04:23.025193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624316 ] 00:27:08.956 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.956 [2024-05-15 07:04:23.094773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.213 [2024-05-15 07:04:23.207564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.213 Running I/O for 1 seconds... 00:27:10.582 00:27:10.582 Latency(us) 00:27:10.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.582 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:10.582 Verification LBA range: start 0x0 length 0x4000 00:27:10.582 Nvme1n1 : 1.01 13278.41 51.87 0.00 0.00 9599.00 1304.65 16311.18 00:27:10.582 =================================================================================================================== 00:27:10.582 Total : 13278.41 51.87 0.00 0.00 9599.00 1304.65 16311.18 00:27:10.582 07:04:24 -- host/bdevperf.sh@30 -- # bdevperfpid=624461 00:27:10.582 07:04:24 -- host/bdevperf.sh@32 -- # sleep 3 00:27:10.582 07:04:24 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:10.582 07:04:24 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:10.582 07:04:24 -- nvmf/common.sh@520 -- # config=() 00:27:10.582 07:04:24 -- nvmf/common.sh@520 -- # local subsystem config 00:27:10.582 07:04:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:10.582 07:04:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:10.582 { 00:27:10.582 "params": { 00:27:10.582 "name": "Nvme$subsystem", 00:27:10.582 "trtype": "$TEST_TRANSPORT", 00:27:10.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:10.582 "adrfam": "ipv4", 00:27:10.582 "trsvcid": "$NVMF_PORT", 00:27:10.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:10.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:10.582 "hdgst": ${hdgst:-false}, 00:27:10.582 "ddgst": ${ddgst:-false} 00:27:10.582 }, 00:27:10.582 "method": "bdev_nvme_attach_controller" 00:27:10.582 } 00:27:10.582 EOF 00:27:10.582 )") 00:27:10.582 07:04:24 -- nvmf/common.sh@542 -- # cat 00:27:10.582 07:04:24 -- nvmf/common.sh@544 -- # jq . 
00:27:10.582 07:04:24 -- host/bdevperf.sh@30 -- # bdevperfpid=624461
00:27:10.582 07:04:24 -- host/bdevperf.sh@32 -- # sleep 3
00:27:10.582 07:04:24 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:27:10.582 07:04:24 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:27:10.582 07:04:24 -- nvmf/common.sh@520 -- # config=()
00:27:10.582 07:04:24 -- nvmf/common.sh@520 -- # local subsystem config
00:27:10.582 07:04:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:27:10.582 07:04:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:27:10.582 {
00:27:10.582   "params": {
00:27:10.582     "name": "Nvme$subsystem",
00:27:10.582     "trtype": "$TEST_TRANSPORT",
00:27:10.582     "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:10.582     "adrfam": "ipv4",
00:27:10.582     "trsvcid": "$NVMF_PORT",
00:27:10.582     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:10.582     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:10.582     "hdgst": ${hdgst:-false},
00:27:10.582     "ddgst": ${ddgst:-false}
00:27:10.582   },
00:27:10.582   "method": "bdev_nvme_attach_controller"
00:27:10.582 }
00:27:10.582 EOF
00:27:10.582 )")
00:27:10.582 07:04:24 -- nvmf/common.sh@542 -- # cat
00:27:10.582 07:04:24 -- nvmf/common.sh@544 -- # jq .
00:27:10.582 07:04:24 -- nvmf/common.sh@545 -- # IFS=,
00:27:10.582 07:04:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:27:10.582 "params": {
00:27:10.582 "name": "Nvme1",
00:27:10.582 "trtype": "tcp",
00:27:10.582 "traddr": "10.0.0.2",
00:27:10.582 "adrfam": "ipv4",
00:27:10.582 "trsvcid": "4420",
00:27:10.582 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:10.582 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:10.582 "hdgst": false,
00:27:10.582 "ddgst": false
00:27:10.582 },
00:27:10.582 "method": "bdev_nvme_attach_controller"
00:27:10.582 }'
00:27:10.582 [2024-05-15 07:04:24.721358] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:10.582 [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624461 ]
00:27:10.582 EAL: No free 2048 kB hugepages reported on node 1
00:27:10.839 [2024-05-15 07:04:24.791051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:11.096 [2024-05-15 07:04:24.900588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:11.096 Running I/O for 15 seconds...
00:27:13.629 07:04:27 -- host/bdevperf.sh@33 -- # kill -9 624154
00:27:13.629 07:04:27 -- host/bdevperf.sh@35 -- # sleep 3
00:27:13.629 [2024-05-15 07:04:27.695991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.629 [2024-05-15 07:04:27.696040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:13.629 [... several dozen further queued READ/WRITE commands on qid:1 completing with ABORTED - SQ DELETION (00/08) between 07:04:27.696 and 07:04:27.699, after the target was killed with -9 while bdevperf I/O was in flight, omitted ...]
00:27:13.631 [2024-05-15 07:04:27.698992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.631 [2024-05-15 07:04:27.699007] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:125704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.631 [2024-05-15 07:04:27.699275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.631 [2024-05-15 07:04:27.699341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.631 [2024-05-15 07:04:27.699373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.631 [2024-05-15 07:04:27.699440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.631 [2024-05-15 07:04:27.699457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.631 [2024-05-15 07:04:27.699472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.699506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.699607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.699641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.699673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.699707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:126232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.699940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.699959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.699975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.700024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 
07:04:27.700039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.632 [2024-05-15 07:04:27.700083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.632 [2024-05-15 07:04:27.700350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700366] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6753a0 is same with the state(5) to be set 00:27:13.632 [2024-05-15 07:04:27.700384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:13.632 [2024-05-15 07:04:27.700398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:13.632 [2024-05-15 07:04:27.700411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125888 len:8 PRP1 0x0 PRP2 0x0 00:27:13.632 [2024-05-15 07:04:27.700426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:13.632 [2024-05-15 07:04:27.700496] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6753a0 was disconnected and freed. reset controller. 00:27:13.632 [2024-05-15 07:04:27.702925] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.632 [2024-05-15 07:04:27.703015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:13.632 [2024-05-15 07:04:27.703632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.632 [2024-05-15 07:04:27.703894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.632 [2024-05-15 07:04:27.703922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:13.632 [2024-05-15 07:04:27.703949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:13.632 [2024-05-15 07:04:27.704103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:13.632 [2024-05-15 07:04:27.704224] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.632 [2024-05-15 07:04:27.704246] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.632 [2024-05-15 07:04:27.704264] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.632 [2024-05-15 07:04:27.706732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
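The errno = 111 that posix_sock_create keeps reporting above is Linux ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the target is torn down, so each redial made during the controller reset is refused and the reset fails. A minimal standalone sketch, independent of this test, that reproduces the same errno from a plain connect() (it assumes a Linux host with no listener on the chosen loopback port):

/* Sketch only: reproduce "connect() failed, errno = 111" (ECONNREFUSED).
 * Assumes no listener on 127.0.0.1:4420 (the default NVMe/TCP port). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return 1;

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    /* With no listener behind the port, the kernel answers with RST and
     * connect() fails with ECONNREFUSED, which is 111 on Linux. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}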
00:27:13.632 [2024-05-15 07:04:27.715629] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.632 [2024-05-15 07:04:27.716083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.632 [2024-05-15 07:04:27.716307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.632 [2024-05-15 07:04:27.716335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.632 [2024-05-15 07:04:27.716353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.632 [2024-05-15 07:04:27.716501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.632 [2024-05-15 07:04:27.716635] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.632 [2024-05-15 07:04:27.716657] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.632 [2024-05-15 07:04:27.716673] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.632 [2024-05-15 07:04:27.719051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.728233] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.728677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.728993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.729023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.729041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.729189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.729304] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.729327] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.729343] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.731714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.740732] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.741155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.741380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.741408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.741425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.741609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.741742] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.741764] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.741780] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.744185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.753330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.753751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.753961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.753987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.754003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.754175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.754326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.754348] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.754363] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.756543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.766010] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.766374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.766593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.766620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.766638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.766840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.767002] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.767026] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.767041] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.769509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.778480] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.778924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.779155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.779183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.779200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.779402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.779607] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.779629] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.779644] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.781915] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.791366] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.791787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.792017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.792044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.792059] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.792230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.792417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.792440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.792456] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.794670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.804048] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.804434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.804677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.804705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.804723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.804870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.805032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.805056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.805071] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.807250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.633 [2024-05-15 07:04:27.816491] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.633 [2024-05-15 07:04:27.816920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.817152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.633 [2024-05-15 07:04:27.817181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.633 [2024-05-15 07:04:27.817198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.633 [2024-05-15 07:04:27.817364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.633 [2024-05-15 07:04:27.817551] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.633 [2024-05-15 07:04:27.817574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.633 [2024-05-15 07:04:27.817589] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.633 [2024-05-15 07:04:27.819750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.634 [2024-05-15 07:04:27.829286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.634 [2024-05-15 07:04:27.829703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.634 [2024-05-15 07:04:27.829926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.634 [2024-05-15 07:04:27.829968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.634 [2024-05-15 07:04:27.829986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.634 [2024-05-15 07:04:27.830170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.634 [2024-05-15 07:04:27.830321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.634 [2024-05-15 07:04:27.830343] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.634 [2024-05-15 07:04:27.830359] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.634 [2024-05-15 07:04:27.832593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.634 [2024-05-15 07:04:27.841868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.634 [2024-05-15 07:04:27.842208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.634 [2024-05-15 07:04:27.842412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.634 [2024-05-15 07:04:27.842440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.634 [2024-05-15 07:04:27.842457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.634 [2024-05-15 07:04:27.842623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.634 [2024-05-15 07:04:27.842828] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.634 [2024-05-15 07:04:27.842851] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.634 [2024-05-15 07:04:27.842866] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.634 [2024-05-15 07:04:27.845233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.634 [2024-05-15 07:04:27.854375] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.634 [2024-05-15 07:04:27.854802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.634 [2024-05-15 07:04:27.855051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.634 [2024-05-15 07:04:27.855085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.634 [2024-05-15 07:04:27.855104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.634 [2024-05-15 07:04:27.855234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.634 [2024-05-15 07:04:27.855421] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.634 [2024-05-15 07:04:27.855443] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.634 [2024-05-15 07:04:27.855459] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.634 [2024-05-15 07:04:27.857903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.894 [2024-05-15 07:04:27.866846] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.894 [2024-05-15 07:04:27.867242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.894 [2024-05-15 07:04:27.867595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.894 [2024-05-15 07:04:27.867627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.894 [2024-05-15 07:04:27.867645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.894 [2024-05-15 07:04:27.867813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.894 [2024-05-15 07:04:27.867993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.894 [2024-05-15 07:04:27.868017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.894 [2024-05-15 07:04:27.868032] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.894 [2024-05-15 07:04:27.870336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.894 [2024-05-15 07:04:27.879531] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.894 [2024-05-15 07:04:27.879896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.894 [2024-05-15 07:04:27.880110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.894 [2024-05-15 07:04:27.880140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.894 [2024-05-15 07:04:27.880158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.894 [2024-05-15 07:04:27.880343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.894 [2024-05-15 07:04:27.880512] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.894 [2024-05-15 07:04:27.880535] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.894 [2024-05-15 07:04:27.880551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.894 [2024-05-15 07:04:27.882855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.894 [2024-05-15 07:04:27.892136] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.894 [2024-05-15 07:04:27.892639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.894 [2024-05-15 07:04:27.892944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.894 [2024-05-15 07:04:27.892973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.894 [2024-05-15 07:04:27.892996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.894 [2024-05-15 07:04:27.893180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.893349] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.893372] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.893388] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.895889] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.904922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.905293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.905514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.905542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.905559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.905707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.905894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.905918] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.905947] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.908399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.917616] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.918057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.918252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.918280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.918297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.918480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.918649] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.918672] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.918688] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.921030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.930381] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.931006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.931218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.931261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.931279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.931460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.931612] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.931635] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.931650] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.934081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.943084] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.943512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.943707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.943735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.943752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.943918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.944110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.944131] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.944145] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.946515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.955498] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.955919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.956104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.956131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.956146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.956343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.956518] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.956543] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.956559] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.958838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.968197] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.968584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.968832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.968860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.968878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.969090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.969249] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.969272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.969287] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.971607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.980768] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.981125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.981296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.981321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.981337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.981557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.981762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.981785] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.981800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.984184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:27.993192] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:27.993583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.994008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.895 [2024-05-15 07:04:27.994038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.895 [2024-05-15 07:04:27.994056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.895 [2024-05-15 07:04:27.994222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.895 [2024-05-15 07:04:27.994409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.895 [2024-05-15 07:04:27.994431] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.895 [2024-05-15 07:04:27.994447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.895 [2024-05-15 07:04:27.996836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.895 [2024-05-15 07:04:28.005856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.895 [2024-05-15 07:04:28.006232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.006458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.006483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.896 [2024-05-15 07:04:28.006499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.896 [2024-05-15 07:04:28.006646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.896 [2024-05-15 07:04:28.006798] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.896 [2024-05-15 07:04:28.006826] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.896 [2024-05-15 07:04:28.006842] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.896 [2024-05-15 07:04:28.009243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.896 [2024-05-15 07:04:28.018437] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.896 [2024-05-15 07:04:28.018887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.019163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.019192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.896 [2024-05-15 07:04:28.019210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.896 [2024-05-15 07:04:28.019412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.896 [2024-05-15 07:04:28.019616] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.896 [2024-05-15 07:04:28.019640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.896 [2024-05-15 07:04:28.019655] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.896 [2024-05-15 07:04:28.022040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.896 [2024-05-15 07:04:28.030995] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.896 [2024-05-15 07:04:28.031407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.031646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.031674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.896 [2024-05-15 07:04:28.031692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.896 [2024-05-15 07:04:28.031858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.896 [2024-05-15 07:04:28.032038] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.896 [2024-05-15 07:04:28.032062] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.896 [2024-05-15 07:04:28.032077] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.896 [2024-05-15 07:04:28.034497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.896 [2024-05-15 07:04:28.043657] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.896 [2024-05-15 07:04:28.044039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.044429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.044477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.896 [2024-05-15 07:04:28.044495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.896 [2024-05-15 07:04:28.044643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.896 [2024-05-15 07:04:28.044758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.896 [2024-05-15 07:04:28.044780] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.896 [2024-05-15 07:04:28.044801] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.896 [2024-05-15 07:04:28.047243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.896 [2024-05-15 07:04:28.056232] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:13.896 [2024-05-15 07:04:28.056625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.056819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:13.896 [2024-05-15 07:04:28.056844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:13.896 [2024-05-15 07:04:28.056860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:13.896 [2024-05-15 07:04:28.057004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:13.896 [2024-05-15 07:04:28.057205] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:13.896 [2024-05-15 07:04:28.057228] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:13.896 [2024-05-15 07:04:28.057244] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:13.896 [2024-05-15 07:04:28.059546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:13.896 [2024-05-15 07:04:28.068590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.896 [2024-05-15 07:04:28.069004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.069191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.069219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:13.896 [2024-05-15 07:04:28.069236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:13.896 [2024-05-15 07:04:28.069383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:13.896 [2024-05-15 07:04:28.069534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.896 [2024-05-15 07:04:28.069556] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.896 [2024-05-15 07:04:28.069572] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.896 [2024-05-15 07:04:28.072084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.896 [2024-05-15 07:04:28.081244] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.896 [2024-05-15 07:04:28.081689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.081893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.081920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:13.896 [2024-05-15 07:04:28.081946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:13.896 [2024-05-15 07:04:28.082165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:13.896 [2024-05-15 07:04:28.082334] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.896 [2024-05-15 07:04:28.082357] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.896 [2024-05-15 07:04:28.082372] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.896 [2024-05-15 07:04:28.084628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.896 [2024-05-15 07:04:28.093788] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.896 [2024-05-15 07:04:28.094183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.094441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.094468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:13.896 [2024-05-15 07:04:28.094486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:13.896 [2024-05-15 07:04:28.094633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:13.896 [2024-05-15 07:04:28.094820] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.896 [2024-05-15 07:04:28.094843] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.896 [2024-05-15 07:04:28.094858] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.896 [2024-05-15 07:04:28.097350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:13.896 [2024-05-15 07:04:28.106406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.896 [2024-05-15 07:04:28.106828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.107065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.896 [2024-05-15 07:04:28.107096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:13.896 [2024-05-15 07:04:28.107114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:13.896 [2024-05-15 07:04:28.107298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:13.896 [2024-05-15 07:04:28.107430] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.896 [2024-05-15 07:04:28.107453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.896 [2024-05-15 07:04:28.107468] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.896 [2024-05-15 07:04:28.109681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:13.897 [2024-05-15 07:04:28.119010] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:13.897 [2024-05-15 07:04:28.119399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.897 [2024-05-15 07:04:28.119699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.897 [2024-05-15 07:04:28.119727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:13.897 [2024-05-15 07:04:28.119744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:13.897 [2024-05-15 07:04:28.119927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:13.897 [2024-05-15 07:04:28.120052] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:13.897 [2024-05-15 07:04:28.120075] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:13.897 [2024-05-15 07:04:28.120090] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:13.897 [2024-05-15 07:04:28.122520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.157 [2024-05-15 07:04:28.131606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.157 [2024-05-15 07:04:28.132019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.157 [2024-05-15 07:04:28.132244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.157 [2024-05-15 07:04:28.132272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.157 [2024-05-15 07:04:28.132290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.132438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.132588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.132611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.132627] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.135114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
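The "(9): Bad file descriptor" in the flush errors above is errno 9, EBADF: once the connect attempt fails, the qpair's socket has already been torn down, so the subsequent attempt to flush completions runs against a descriptor that is no longer valid. A minimal sketch (plain POSIX C, not SPDK code) of that same failure mode:

/* Minimal sketch (not SPDK code): writing to a descriptor that has
 * already been closed fails with errno 9 (EBADF), which is the
 * "(9): Bad file descriptor" reported by
 * nvme_tcp_qpair_process_completions in the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                  /* tear the descriptor down first */

    if (write(fds[1], "x", 1) < 0)  /* then try to flush data through it */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fds[0]);
    return 0;
}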
00:27:14.158 [2024-05-15 07:04:28.144064] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.144428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.144826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.144885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.144903] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.145097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.145249] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.145272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.145288] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.147645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.158 [2024-05-15 07:04:28.156799] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.157215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.157549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.157616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.157632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.157816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.158032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.158056] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.158072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.160323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.158 [2024-05-15 07:04:28.169362] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.169750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.170006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.170032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.170048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.170179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.170382] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.170406] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.170422] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.172551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.158 [2024-05-15 07:04:28.181903] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.182321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.182597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.182660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.182678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.182844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.183041] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.183065] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.183081] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.185436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.158 [2024-05-15 07:04:28.194552] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.194888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.195137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.195166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.195183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.195367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.195553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.195575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.195591] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.198008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.158 [2024-05-15 07:04:28.207253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.207852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.208097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.208126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.208149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.208351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.208520] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.208543] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.208558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.211053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.158 [2024-05-15 07:04:28.219769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.220206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.220475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.220500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.220516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.220691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.220879] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.220901] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.220917] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.223104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.158 [2024-05-15 07:04:28.232330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.232704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.232972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.233001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.158 [2024-05-15 07:04:28.233019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.158 [2024-05-15 07:04:28.233203] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.158 [2024-05-15 07:04:28.233371] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.158 [2024-05-15 07:04:28.233394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.158 [2024-05-15 07:04:28.233409] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.158 [2024-05-15 07:04:28.235791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.158 [2024-05-15 07:04:28.245059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.158 [2024-05-15 07:04:28.245469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.158 [2024-05-15 07:04:28.245843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.245897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.245914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.246059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.246299] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.246322] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.246338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.248551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.159 [2024-05-15 07:04:28.257667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.258167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.258407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.258435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.258452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.258581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.258750] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.258772] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.258789] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.261172] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.159 [2024-05-15 07:04:28.270134] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.270513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.270850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.270912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.270938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.271143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.271347] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.271370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.271386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.273763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.159 [2024-05-15 07:04:28.282556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.283023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.283228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.283253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.283269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.283423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.283598] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.283621] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.283637] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.286000] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.159 [2024-05-15 07:04:28.295105] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.295557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.295821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.295849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.295866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.296024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.296176] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.296199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.296215] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.298519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.159 [2024-05-15 07:04:28.307538] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.307966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.308193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.308221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.308238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.308457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.308626] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.308649] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.308665] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.310959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.159 [2024-05-15 07:04:28.320235] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.320720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.320944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.320973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.320991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.321157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.321326] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.321354] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.321370] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.323584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.159 [2024-05-15 07:04:28.332907] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.333305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.333612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.333640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.333658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.333878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.334095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.159 [2024-05-15 07:04:28.334120] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.159 [2024-05-15 07:04:28.334135] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.159 [2024-05-15 07:04:28.336492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.159 [2024-05-15 07:04:28.345332] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.159 [2024-05-15 07:04:28.345703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.345957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.159 [2024-05-15 07:04:28.345986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.159 [2024-05-15 07:04:28.346004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.159 [2024-05-15 07:04:28.346187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.159 [2024-05-15 07:04:28.346338] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.160 [2024-05-15 07:04:28.346360] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.160 [2024-05-15 07:04:28.346376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.160 [2024-05-15 07:04:28.348840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.160 [2024-05-15 07:04:28.358148] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.160 [2024-05-15 07:04:28.358534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.160 [2024-05-15 07:04:28.358764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.160 [2024-05-15 07:04:28.358790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.160 [2024-05-15 07:04:28.358823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.160 [2024-05-15 07:04:28.359035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.160 [2024-05-15 07:04:28.359187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.160 [2024-05-15 07:04:28.359214] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.160 [2024-05-15 07:04:28.359235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.160 [2024-05-15 07:04:28.361559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.160 [2024-05-15 07:04:28.370633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.160 [2024-05-15 07:04:28.371100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.160 [2024-05-15 07:04:28.371300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.160 [2024-05-15 07:04:28.371328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.160 [2024-05-15 07:04:28.371345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.160 [2024-05-15 07:04:28.371511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.160 [2024-05-15 07:04:28.371697] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.160 [2024-05-15 07:04:28.371720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.160 [2024-05-15 07:04:28.371736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.160 [2024-05-15 07:04:28.373940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.160 [2024-05-15 07:04:28.383161] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.160 [2024-05-15 07:04:28.383594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.160 [2024-05-15 07:04:28.384028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.160 [2024-05-15 07:04:28.384058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.160 [2024-05-15 07:04:28.384075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.160 [2024-05-15 07:04:28.384205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.160 [2024-05-15 07:04:28.384373] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.160 [2024-05-15 07:04:28.384395] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.160 [2024-05-15 07:04:28.384411] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.160 [2024-05-15 07:04:28.386876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.420 [2024-05-15 07:04:28.395739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.420 [2024-05-15 07:04:28.396165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.420 [2024-05-15 07:04:28.396423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.420 [2024-05-15 07:04:28.396452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.420 [2024-05-15 07:04:28.396469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.420 [2024-05-15 07:04:28.396617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.420 [2024-05-15 07:04:28.396786] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.420 [2024-05-15 07:04:28.396809] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.420 [2024-05-15 07:04:28.396825] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.420 [2024-05-15 07:04:28.399107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.420 [2024-05-15 07:04:28.408304] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.420 [2024-05-15 07:04:28.408730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.420 [2024-05-15 07:04:28.408948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.420 [2024-05-15 07:04:28.408975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.420 [2024-05-15 07:04:28.408990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.420 [2024-05-15 07:04:28.409169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.420 [2024-05-15 07:04:28.409302] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.420 [2024-05-15 07:04:28.409326] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.420 [2024-05-15 07:04:28.409341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.420 [2024-05-15 07:04:28.411716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.420 [2024-05-15 07:04:28.421002] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.420 [2024-05-15 07:04:28.421637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.420 [2024-05-15 07:04:28.421887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.420 [2024-05-15 07:04:28.421912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.420 [2024-05-15 07:04:28.421928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.420 [2024-05-15 07:04:28.422156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.420 [2024-05-15 07:04:28.422307] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.420 [2024-05-15 07:04:28.422330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.420 [2024-05-15 07:04:28.422346] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.420 [2024-05-15 07:04:28.424665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.421 [2024-05-15 07:04:28.433264] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.421 [2024-05-15 07:04:28.433724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.433973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.434010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.421 [2024-05-15 07:04:28.434028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.421 [2024-05-15 07:04:28.434176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.421 [2024-05-15 07:04:28.434291] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.421 [2024-05-15 07:04:28.434314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.421 [2024-05-15 07:04:28.434329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.421 [2024-05-15 07:04:28.436622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.421 [2024-05-15 07:04:28.446149] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.421 [2024-05-15 07:04:28.446627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.446877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.446905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.421 [2024-05-15 07:04:28.446922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.421 [2024-05-15 07:04:28.447079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.421 [2024-05-15 07:04:28.447266] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.421 [2024-05-15 07:04:28.447289] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.421 [2024-05-15 07:04:28.447304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.421 [2024-05-15 07:04:28.449606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.421 [2024-05-15 07:04:28.458749] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.421 [2024-05-15 07:04:28.459153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.459408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.459435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.421 [2024-05-15 07:04:28.459453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.421 [2024-05-15 07:04:28.459655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.421 [2024-05-15 07:04:28.459806] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.421 [2024-05-15 07:04:28.459829] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.421 [2024-05-15 07:04:28.459844] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.421 [2024-05-15 07:04:28.462013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.421 [2024-05-15 07:04:28.471464] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.421 [2024-05-15 07:04:28.471944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.472167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.472195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.421 [2024-05-15 07:04:28.472213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.421 [2024-05-15 07:04:28.472396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.421 [2024-05-15 07:04:28.472565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.421 [2024-05-15 07:04:28.472588] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.421 [2024-05-15 07:04:28.472603] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.421 [2024-05-15 07:04:28.474858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.421 [2024-05-15 07:04:28.483972] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.421 [2024-05-15 07:04:28.484349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.484577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.421 [2024-05-15 07:04:28.484623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.421 [2024-05-15 07:04:28.484641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.421 [2024-05-15 07:04:28.484789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.421 [2024-05-15 07:04:28.484971] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.421 [2024-05-15 07:04:28.484995] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.421 [2024-05-15 07:04:28.485010] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.421 [2024-05-15 07:04:28.487327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.944 [2024-05-15 07:04:29.050965] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.944 [2024-05-15 07:04:29.051384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.051607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.051635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.944 [2024-05-15 07:04:29.051652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.944 [2024-05-15 07:04:29.051781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.944 [2024-05-15 07:04:29.051959] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.944 [2024-05-15 07:04:29.051983] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.944 [2024-05-15 07:04:29.051999] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.944 [2024-05-15 07:04:29.054199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.944 [2024-05-15 07:04:29.063490] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.944 [2024-05-15 07:04:29.063900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.064146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.064175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.944 [2024-05-15 07:04:29.064192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.944 [2024-05-15 07:04:29.064358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.944 [2024-05-15 07:04:29.064526] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.944 [2024-05-15 07:04:29.064549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.944 [2024-05-15 07:04:29.064565] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.944 [2024-05-15 07:04:29.066859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.944 [2024-05-15 07:04:29.075975] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.944 [2024-05-15 07:04:29.076374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.076596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.076621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.944 [2024-05-15 07:04:29.076637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.944 [2024-05-15 07:04:29.076756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.944 [2024-05-15 07:04:29.076970] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.944 [2024-05-15 07:04:29.076995] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.944 [2024-05-15 07:04:29.077011] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.944 [2024-05-15 07:04:29.079348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.944 [2024-05-15 07:04:29.088731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.944 [2024-05-15 07:04:29.089160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.089367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.089397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.944 [2024-05-15 07:04:29.089415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.944 [2024-05-15 07:04:29.089599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.944 [2024-05-15 07:04:29.089787] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.944 [2024-05-15 07:04:29.089810] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.944 [2024-05-15 07:04:29.089826] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.944 [2024-05-15 07:04:29.092085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.944 [2024-05-15 07:04:29.101424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.944 [2024-05-15 07:04:29.101909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.102178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.944 [2024-05-15 07:04:29.102206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.944 [2024-05-15 07:04:29.102224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.944 [2024-05-15 07:04:29.102372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.944 [2024-05-15 07:04:29.102559] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.944 [2024-05-15 07:04:29.102582] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.945 [2024-05-15 07:04:29.102598] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.945 [2024-05-15 07:04:29.104848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.945 [2024-05-15 07:04:29.114074] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.945 [2024-05-15 07:04:29.114440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.114692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.114719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.945 [2024-05-15 07:04:29.114736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.945 [2024-05-15 07:04:29.114884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.945 [2024-05-15 07:04:29.115081] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.945 [2024-05-15 07:04:29.115105] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.945 [2024-05-15 07:04:29.115121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.945 [2024-05-15 07:04:29.117258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.945 [2024-05-15 07:04:29.126707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.945 [2024-05-15 07:04:29.127056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.127309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.127337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.945 [2024-05-15 07:04:29.127355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.945 [2024-05-15 07:04:29.127521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.945 [2024-05-15 07:04:29.127708] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.945 [2024-05-15 07:04:29.127731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.945 [2024-05-15 07:04:29.127746] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.945 [2024-05-15 07:04:29.130202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.945 [2024-05-15 07:04:29.139327] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.945 [2024-05-15 07:04:29.139709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.139964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.139993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.945 [2024-05-15 07:04:29.140010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.945 [2024-05-15 07:04:29.140176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.945 [2024-05-15 07:04:29.140381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.945 [2024-05-15 07:04:29.140404] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.945 [2024-05-15 07:04:29.140420] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.945 [2024-05-15 07:04:29.142630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:14.945 [2024-05-15 07:04:29.152153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.945 [2024-05-15 07:04:29.152624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.152963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.152996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.945 [2024-05-15 07:04:29.153014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.945 [2024-05-15 07:04:29.153215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.945 [2024-05-15 07:04:29.153348] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.945 [2024-05-15 07:04:29.153370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.945 [2024-05-15 07:04:29.153386] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.945 [2024-05-15 07:04:29.155655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:14.945 [2024-05-15 07:04:29.164820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:14.945 [2024-05-15 07:04:29.165239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.165633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:14.945 [2024-05-15 07:04:29.165687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:14.945 [2024-05-15 07:04:29.165704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:14.945 [2024-05-15 07:04:29.165888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:14.945 [2024-05-15 07:04:29.166084] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:14.945 [2024-05-15 07:04:29.166108] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:14.945 [2024-05-15 07:04:29.166123] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:14.945 [2024-05-15 07:04:29.168466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.242 [2024-05-15 07:04:29.177307] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.177755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.177985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.178014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.178031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.178197] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.178383] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.178406] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.178421] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.180832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.242 [2024-05-15 07:04:29.190031] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.190483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.190680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.190716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.190742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.190945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.191129] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.191157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.191174] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.193886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.242 [2024-05-15 07:04:29.202566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.202956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.203157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.203185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.203203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.203351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.203543] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.203567] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.203583] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.206035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.242 [2024-05-15 07:04:29.215186] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.215558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.215754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.215782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.215799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.215959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.216146] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.216170] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.216185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.218576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.242 [2024-05-15 07:04:29.227796] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.228194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.228635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.228689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.228712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.228945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.229133] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.229156] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.229172] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.231585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.242 [2024-05-15 07:04:29.240323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.240711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.240960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.240989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.241007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.241191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.241360] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.241382] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.241398] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.243645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.242 [2024-05-15 07:04:29.252856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.253252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.253651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.253703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.253720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.253867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.254011] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.254035] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.254051] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.256389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.242 [2024-05-15 07:04:29.265618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.265989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.266214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.266241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.242 [2024-05-15 07:04:29.266259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.242 [2024-05-15 07:04:29.266468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.242 [2024-05-15 07:04:29.266674] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.242 [2024-05-15 07:04:29.266697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.242 [2024-05-15 07:04:29.266712] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.242 [2024-05-15 07:04:29.269062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.242 [2024-05-15 07:04:29.278189] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.242 [2024-05-15 07:04:29.278715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.242 [2024-05-15 07:04:29.278959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.278989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.279006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.279189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.279340] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.279363] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.279378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.281665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.243 [2024-05-15 07:04:29.290860] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.291249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.291628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.291689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.291706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.291908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.292087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.292111] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.292128] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.294575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.243 [2024-05-15 07:04:29.303507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.303946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.304175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.304203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.304221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.304369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.304580] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.304603] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.304619] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.306670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.243 [2024-05-15 07:04:29.316106] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.316449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.316680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.316708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.316726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.316874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.317035] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.317059] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.317075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.319504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.243 [2024-05-15 07:04:29.328553] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.328952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.329179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.329210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.329228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.329429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.329633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.329657] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.329672] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.331874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.243 [2024-05-15 07:04:29.341125] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.341688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.341969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.342002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.342020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.342211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.342381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.342411] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.342427] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.344881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.243 [2024-05-15 07:04:29.353813] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.354289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.354501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.354532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.354550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.354716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.354885] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.354909] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.354924] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.357351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.243 [2024-05-15 07:04:29.366367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.366796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.367020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.367050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.367067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.367251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.367366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.367388] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.367404] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.369581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.243 [2024-05-15 07:04:29.379016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.243 [2024-05-15 07:04:29.379411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.379609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.243 [2024-05-15 07:04:29.379638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.243 [2024-05-15 07:04:29.379655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.243 [2024-05-15 07:04:29.379803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.243 [2024-05-15 07:04:29.379984] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.243 [2024-05-15 07:04:29.380008] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.243 [2024-05-15 07:04:29.380029] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.243 [2024-05-15 07:04:29.382405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.243 [2024-05-15 07:04:29.391731] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.244 [2024-05-15 07:04:29.392123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.392297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.392322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.244 [2024-05-15 07:04:29.392338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.244 [2024-05-15 07:04:29.392476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.244 [2024-05-15 07:04:29.392664] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.244 [2024-05-15 07:04:29.392687] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.244 [2024-05-15 07:04:29.392702] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.244 [2024-05-15 07:04:29.395089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.244 [2024-05-15 07:04:29.404177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.244 [2024-05-15 07:04:29.404603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.404830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.404858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.244 [2024-05-15 07:04:29.404875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.244 [2024-05-15 07:04:29.405071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.244 [2024-05-15 07:04:29.405187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.244 [2024-05-15 07:04:29.405210] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.244 [2024-05-15 07:04:29.405225] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.244 [2024-05-15 07:04:29.407637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.244 [2024-05-15 07:04:29.416600] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.244 [2024-05-15 07:04:29.417006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.417277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.417305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.244 [2024-05-15 07:04:29.417323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.244 [2024-05-15 07:04:29.417488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.244 [2024-05-15 07:04:29.417657] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.244 [2024-05-15 07:04:29.417680] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.244 [2024-05-15 07:04:29.417695] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.244 [2024-05-15 07:04:29.420054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.244 [2024-05-15 07:04:29.429184] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.244 [2024-05-15 07:04:29.429623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.429827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.429855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.244 [2024-05-15 07:04:29.429872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.244 [2024-05-15 07:04:29.430062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.244 [2024-05-15 07:04:29.430263] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.244 [2024-05-15 07:04:29.430287] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.244 [2024-05-15 07:04:29.430302] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.244 [2024-05-15 07:04:29.432788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.244 [2024-05-15 07:04:29.441770] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.244 [2024-05-15 07:04:29.442177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.442381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.244 [2024-05-15 07:04:29.442408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.244 [2024-05-15 07:04:29.442424] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.244 [2024-05-15 07:04:29.442565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.244 [2024-05-15 07:04:29.442757] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.244 [2024-05-15 07:04:29.442796] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.244 [2024-05-15 07:04:29.442813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.244 [2024-05-15 07:04:29.444847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.504 [2024-05-15 07:04:29.454624] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.504 [2024-05-15 07:04:29.455045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.455233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.455259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.504 [2024-05-15 07:04:29.455274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.504 [2024-05-15 07:04:29.455483] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.504 [2024-05-15 07:04:29.455653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.504 [2024-05-15 07:04:29.455678] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.504 [2024-05-15 07:04:29.455694] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.504 [2024-05-15 07:04:29.457876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.504 [2024-05-15 07:04:29.467179] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.504 [2024-05-15 07:04:29.467805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.468030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.468060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.504 [2024-05-15 07:04:29.468079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.504 [2024-05-15 07:04:29.468228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.504 [2024-05-15 07:04:29.468379] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.504 [2024-05-15 07:04:29.468402] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.504 [2024-05-15 07:04:29.468418] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.504 [2024-05-15 07:04:29.470704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.504 [2024-05-15 07:04:29.479707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.504 [2024-05-15 07:04:29.480091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.480343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.480372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.504 [2024-05-15 07:04:29.480389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.504 [2024-05-15 07:04:29.480519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.504 [2024-05-15 07:04:29.480706] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.504 [2024-05-15 07:04:29.480728] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.504 [2024-05-15 07:04:29.480745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.504 [2024-05-15 07:04:29.483078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.504 [2024-05-15 07:04:29.492466] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.504 [2024-05-15 07:04:29.492893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.493135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.493160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.504 [2024-05-15 07:04:29.493176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.504 [2024-05-15 07:04:29.493359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.504 [2024-05-15 07:04:29.493565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.504 [2024-05-15 07:04:29.493588] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.504 [2024-05-15 07:04:29.493604] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.504 [2024-05-15 07:04:29.495947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.504 [2024-05-15 07:04:29.504972] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.504 [2024-05-15 07:04:29.505580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.505891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.505919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.504 [2024-05-15 07:04:29.505944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.504 [2024-05-15 07:04:29.506147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.504 [2024-05-15 07:04:29.506316] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.504 [2024-05-15 07:04:29.506339] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.504 [2024-05-15 07:04:29.506355] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.504 [2024-05-15 07:04:29.508658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:15.504 [2024-05-15 07:04:29.517452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:15.504 [2024-05-15 07:04:29.517825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.518051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.504 [2024-05-15 07:04:29.518077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:15.504 [2024-05-15 07:04:29.518092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:15.504 [2024-05-15 07:04:29.518241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:15.504 [2024-05-15 07:04:29.518417] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:15.504 [2024-05-15 07:04:29.518437] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:15.504 [2024-05-15 07:04:29.518450] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:15.504 [2024-05-15 07:04:29.520717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:15.504 [2024-05-15 07:04:29.530076] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.504 [2024-05-15 07:04:29.530582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.504 [2024-05-15 07:04:29.530826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.530854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.530871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.531069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.531205] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.531242] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.531258] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.533554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.542658] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.543073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.543342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.543393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.543411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.543577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.543746] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.543768] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.543784] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.546075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.555152] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.555662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.555876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.555903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.555920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.556112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.556283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.556306] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.556322] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.558587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.567382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.567808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.568060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.568089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.568107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.568255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.568424] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.568446] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.568461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.570891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.579971] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.580415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.580661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.580689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.580711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.580860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.581020] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.581044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.581060] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.583274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.592592] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.592999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.593220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.593248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.593266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.593415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.593566] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.593588] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.593604] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.595954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.605133] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.605534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.605754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.605782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.605800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.605977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.606164] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.606188] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.606203] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.608505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.617777] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.618236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.618474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.618500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.618532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.618757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.618954] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.618978] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.618993] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.621294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.505 [2024-05-15 07:04:29.630309] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.505 [2024-05-15 07:04:29.630756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.630973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.505 [2024-05-15 07:04:29.630999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.505 [2024-05-15 07:04:29.631015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.505 [2024-05-15 07:04:29.631167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.505 [2024-05-15 07:04:29.631372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.505 [2024-05-15 07:04:29.631395] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.505 [2024-05-15 07:04:29.631410] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.505 [2024-05-15 07:04:29.633712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.643204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.643724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.643972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.644001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.644019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.644148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.644299] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.644321] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.644337] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.646654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.655774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.656153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.656354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.656382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.656400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.656565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.656704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.656727] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.656744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.659224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.668506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.668953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.669205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.669233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.669251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.669417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.669567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.669590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.669605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.671910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.681024] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.681405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.681680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.681708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.681725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.681854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.682069] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.682093] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.682109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.684395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.693634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.694056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.694396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.694445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.694462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.694664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.694796] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.694819] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.694840] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.697171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.706075] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.706496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.706744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.706771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.706787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.706966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.707160] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.707184] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.707200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.709642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.718694] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.719107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.719313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.719341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.719359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.719489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.719657] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.719679] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.719694] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.722202] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.506 [2024-05-15 07:04:29.731199] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.506 [2024-05-15 07:04:29.731583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.731823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.506 [2024-05-15 07:04:29.731851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.506 [2024-05-15 07:04:29.731869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.506 [2024-05-15 07:04:29.732102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.506 [2024-05-15 07:04:29.732272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.506 [2024-05-15 07:04:29.732294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.506 [2024-05-15 07:04:29.732309] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.506 [2024-05-15 07:04:29.734689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.767 [2024-05-15 07:04:29.743761] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.767 [2024-05-15 07:04:29.744199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.744576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.744630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.767 [2024-05-15 07:04:29.744648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.767 [2024-05-15 07:04:29.744831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.767 [2024-05-15 07:04:29.744993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.767 [2024-05-15 07:04:29.745017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.767 [2024-05-15 07:04:29.745033] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.767 [2024-05-15 07:04:29.747263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.767 [2024-05-15 07:04:29.756404] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.767 [2024-05-15 07:04:29.756893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.757149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.757192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.767 [2024-05-15 07:04:29.757210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.767 [2024-05-15 07:04:29.757394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.767 [2024-05-15 07:04:29.757562] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.767 [2024-05-15 07:04:29.757585] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.767 [2024-05-15 07:04:29.757600] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.767 [2024-05-15 07:04:29.759815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.767 [2024-05-15 07:04:29.769061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.767 [2024-05-15 07:04:29.769653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.769904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.769940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.767 [2024-05-15 07:04:29.769959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.767 [2024-05-15 07:04:29.770148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.767 [2024-05-15 07:04:29.770353] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.767 [2024-05-15 07:04:29.770376] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.767 [2024-05-15 07:04:29.770391] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.767 [2024-05-15 07:04:29.772624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.767 [2024-05-15 07:04:29.781715] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.767 [2024-05-15 07:04:29.782209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.782455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.782483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.767 [2024-05-15 07:04:29.782501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.767 [2024-05-15 07:04:29.782612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.767 [2024-05-15 07:04:29.782762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.767 [2024-05-15 07:04:29.782784] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.767 [2024-05-15 07:04:29.782800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.767 [2024-05-15 07:04:29.785022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.767 [2024-05-15 07:04:29.794403] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.767 [2024-05-15 07:04:29.794779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.795012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.767 [2024-05-15 07:04:29.795038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.767 [2024-05-15 07:04:29.795053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.767 [2024-05-15 07:04:29.795253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.767 [2024-05-15 07:04:29.795422] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.795445] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.795460] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.797655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.806900] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.807310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.807532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.807561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.807579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.807691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.807878] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.807901] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.807917] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.810249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.819632] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.820056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.820287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.820315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.820333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.820481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.820686] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.820708] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.820723] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.822939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.832134] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.832552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.832865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.832905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.832920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.833063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.833250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.833273] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.833289] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.835448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.844696] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.845138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.845414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.845457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.845475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.845640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.845826] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.845849] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.845864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.848337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.857386] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.857773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.858028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.858065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.858083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.858231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.858400] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.858422] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.858437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.860850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.870046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.870467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.870746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.870791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.870809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.870985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.871155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.871178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.871194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.873513] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.882513] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.882913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.883112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.883140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.883158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.883305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.883474] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.883497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.883513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.885798] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.894918] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.895366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.895568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.895593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.895614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.768 [2024-05-15 07:04:29.895724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.768 [2024-05-15 07:04:29.895901] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.768 [2024-05-15 07:04:29.895924] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.768 [2024-05-15 07:04:29.895951] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.768 [2024-05-15 07:04:29.898417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.768 [2024-05-15 07:04:29.907431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.768 [2024-05-15 07:04:29.907789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.908143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.768 [2024-05-15 07:04:29.908201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.768 [2024-05-15 07:04:29.908218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.908366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.908536] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.908559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.908574] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.910840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.769 [2024-05-15 07:04:29.920165] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.769 [2024-05-15 07:04:29.920618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.920843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.920873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.769 [2024-05-15 07:04:29.920890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.921071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.921260] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.921283] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.921298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.923600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.769 [2024-05-15 07:04:29.932784] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.769 [2024-05-15 07:04:29.933904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.934152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.934180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.769 [2024-05-15 07:04:29.934196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.934418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.934622] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.934642] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.934656] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.936780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.769 [2024-05-15 07:04:29.945340] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.769 [2024-05-15 07:04:29.945766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.946025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.946051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.769 [2024-05-15 07:04:29.946067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.946184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.946293] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.946316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.946331] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.948755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.769 [2024-05-15 07:04:29.957864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.769 [2024-05-15 07:04:29.958283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.958541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.958566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.769 [2024-05-15 07:04:29.958582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.958771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.958958] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.958997] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.959011] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.961277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.769 [2024-05-15 07:04:29.970402] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.769 [2024-05-15 07:04:29.970895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.971135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.971163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.769 [2024-05-15 07:04:29.971181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.971311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.971487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.971510] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.971525] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.973899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.769 [2024-05-15 07:04:29.982913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.769 [2024-05-15 07:04:29.983322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.983534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.983558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.769 [2024-05-15 07:04:29.983574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.983819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.984033] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.984057] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.984072] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.986533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:15.769 [2024-05-15 07:04:29.995268] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:15.769 [2024-05-15 07:04:29.995636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.995891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.769 [2024-05-15 07:04:29.995917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:15.769 [2024-05-15 07:04:29.995941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:15.769 [2024-05-15 07:04:29.996077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:15.769 [2024-05-15 07:04:29.996276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:15.769 [2024-05-15 07:04:29.996301] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:15.769 [2024-05-15 07:04:29.996316] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:15.769 [2024-05-15 07:04:29.998691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.030 [2024-05-15 07:04:30.007826] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.030 [2024-05-15 07:04:30.008172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-05-15 07:04:30.008412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-05-15 07:04:30.008441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.030 [2024-05-15 07:04:30.008459] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.030 [2024-05-15 07:04:30.008643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.030 [2024-05-15 07:04:30.008758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.030 [2024-05-15 07:04:30.008787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.030 [2024-05-15 07:04:30.008804] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.030 [2024-05-15 07:04:30.011312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.030 [2024-05-15 07:04:30.020525] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.030 [2024-05-15 07:04:30.020942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-05-15 07:04:30.021169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.030 [2024-05-15 07:04:30.021195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.030 [2024-05-15 07:04:30.021227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.030 [2024-05-15 07:04:30.021359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.030 [2024-05-15 07:04:30.021582] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.030 [2024-05-15 07:04:30.021605] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.030 [2024-05-15 07:04:30.021621] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.030 [2024-05-15 07:04:30.024178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.030 [2024-05-15 07:04:30.033246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.030 [2024-05-15 07:04:30.033643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.033853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.033882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.030 [2024-05-15 07:04:30.033901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.030 [2024-05-15 07:04:30.034113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.030 [2024-05-15 07:04:30.034339] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.030 [2024-05-15 07:04:30.034362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.030 [2024-05-15 07:04:30.034378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.030 [2024-05-15 07:04:30.036628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.030 [2024-05-15 07:04:30.045797] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.030 [2024-05-15 07:04:30.046209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.046498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.046550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.030 [2024-05-15 07:04:30.046567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.030 [2024-05-15 07:04:30.046752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.030 [2024-05-15 07:04:30.046922] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.030 [2024-05-15 07:04:30.046958] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.030 [2024-05-15 07:04:30.046983] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.030 [2024-05-15 07:04:30.049269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.030 [2024-05-15 07:04:30.058367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.030 [2024-05-15 07:04:30.058817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.059037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.059063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.030 [2024-05-15 07:04:30.059079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.030 [2024-05-15 07:04:30.059163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.030 [2024-05-15 07:04:30.059378] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.030 [2024-05-15 07:04:30.059401] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.030 [2024-05-15 07:04:30.059417] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.030 [2024-05-15 07:04:30.061577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.030 [2024-05-15 07:04:30.070979] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.030 [2024-05-15 07:04:30.071573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.071878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.030 [2024-05-15 07:04:30.071906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.071923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.072117] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.072341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.072364] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.072379] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.074590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.031 [2024-05-15 07:04:30.083499] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.083868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.084113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.084142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.084159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.084325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.084476] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.084499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.084514] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.086935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.031 [2024-05-15 07:04:30.096077] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.096475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.096702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.096728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.096743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.096940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.097091] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.097114] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.097130] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.099557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.031 [2024-05-15 07:04:30.108543] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.109035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.109266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.109294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.109311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.109495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.109700] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.109723] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.109738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.112267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.031 [2024-05-15 07:04:30.120974] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.121397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.121814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.121878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.121895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.122053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.122205] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.122228] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.122244] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.124492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.031 [2024-05-15 07:04:30.133414] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.133836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.134092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.134119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.134135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.134307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.134502] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.134525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.134540] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.136832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.031 [2024-05-15 07:04:30.145900] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.146354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.146596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.146637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.146655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.146857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.147035] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.147058] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.147074] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.149519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.031 [2024-05-15 07:04:30.158448] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.158989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.159193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.159221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.159239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.159405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.159556] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.159579] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.159595] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.161612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.031 [2024-05-15 07:04:30.171036] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.171640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.171900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.171947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.171967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.031 [2024-05-15 07:04:30.172133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.031 [2024-05-15 07:04:30.172248] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.031 [2024-05-15 07:04:30.172271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.031 [2024-05-15 07:04:30.172294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.031 [2024-05-15 07:04:30.174560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.031 [2024-05-15 07:04:30.183460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.031 [2024-05-15 07:04:30.183795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.183999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.031 [2024-05-15 07:04:30.184025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.031 [2024-05-15 07:04:30.184041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.032 [2024-05-15 07:04:30.184241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.032 [2024-05-15 07:04:30.184446] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.032 [2024-05-15 07:04:30.184469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.032 [2024-05-15 07:04:30.184485] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.032 [2024-05-15 07:04:30.187026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.032 [2024-05-15 07:04:30.195825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.032 [2024-05-15 07:04:30.196275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.196705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.196751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.032 [2024-05-15 07:04:30.196768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.032 [2024-05-15 07:04:30.196916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.032 [2024-05-15 07:04:30.197112] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.032 [2024-05-15 07:04:30.197136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.032 [2024-05-15 07:04:30.197152] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.032 [2024-05-15 07:04:30.199490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.032 [2024-05-15 07:04:30.208476] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.032 [2024-05-15 07:04:30.209045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.209283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.209313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.032 [2024-05-15 07:04:30.209330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.032 [2024-05-15 07:04:30.209479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.032 [2024-05-15 07:04:30.209690] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.032 [2024-05-15 07:04:30.209714] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.032 [2024-05-15 07:04:30.209730] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.032 [2024-05-15 07:04:30.211914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.032 [2024-05-15 07:04:30.220993] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.032 [2024-05-15 07:04:30.221355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.221608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.221635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.032 [2024-05-15 07:04:30.221653] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.032 [2024-05-15 07:04:30.221782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.032 [2024-05-15 07:04:30.221914] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.032 [2024-05-15 07:04:30.221945] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.032 [2024-05-15 07:04:30.221962] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.032 [2024-05-15 07:04:30.224119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.032 [2024-05-15 07:04:30.233769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.032 [2024-05-15 07:04:30.234143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.234384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.234409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.032 [2024-05-15 07:04:30.234425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.032 [2024-05-15 07:04:30.234594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.032 [2024-05-15 07:04:30.234800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.032 [2024-05-15 07:04:30.234822] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.032 [2024-05-15 07:04:30.234838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.032 [2024-05-15 07:04:30.237185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.032 [2024-05-15 07:04:30.246485] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.032 [2024-05-15 07:04:30.246899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.247103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.247132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.032 [2024-05-15 07:04:30.247155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.032 [2024-05-15 07:04:30.247376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.032 [2024-05-15 07:04:30.247562] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.032 [2024-05-15 07:04:30.247585] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.032 [2024-05-15 07:04:30.247601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.032 [2024-05-15 07:04:30.249887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.032 [2024-05-15 07:04:30.259008] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.032 [2024-05-15 07:04:30.259391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.259603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.032 [2024-05-15 07:04:30.259630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.032 [2024-05-15 07:04:30.259646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.032 [2024-05-15 07:04:30.259790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.032 [2024-05-15 07:04:30.259986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.032 [2024-05-15 07:04:30.260010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.032 [2024-05-15 07:04:30.260025] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.032 [2024-05-15 07:04:30.262186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.292 [2024-05-15 07:04:30.271444] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.292 [2024-05-15 07:04:30.271987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.272212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.272257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.292 [2024-05-15 07:04:30.272274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.292 [2024-05-15 07:04:30.272489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.292 [2024-05-15 07:04:30.272604] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.292 [2024-05-15 07:04:30.272627] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.292 [2024-05-15 07:04:30.272642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.292 [2024-05-15 07:04:30.274994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.292 [2024-05-15 07:04:30.283707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.292 [2024-05-15 07:04:30.284079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.284304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.284333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.292 [2024-05-15 07:04:30.284350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.292 [2024-05-15 07:04:30.284521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.292 [2024-05-15 07:04:30.284690] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.292 [2024-05-15 07:04:30.284742] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.292 [2024-05-15 07:04:30.284758] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.292 [2024-05-15 07:04:30.287074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.292 [2024-05-15 07:04:30.296236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.292 [2024-05-15 07:04:30.296667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.297011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.297041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.292 [2024-05-15 07:04:30.297058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.292 [2024-05-15 07:04:30.297243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.292 [2024-05-15 07:04:30.297394] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.292 [2024-05-15 07:04:30.297416] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.292 [2024-05-15 07:04:30.297431] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.292 [2024-05-15 07:04:30.299879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.292 [2024-05-15 07:04:30.308740] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.292 [2024-05-15 07:04:30.309171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.309526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.309590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.292 [2024-05-15 07:04:30.309608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.292 [2024-05-15 07:04:30.309773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.292 [2024-05-15 07:04:30.309888] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.292 [2024-05-15 07:04:30.309910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.292 [2024-05-15 07:04:30.309926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.292 [2024-05-15 07:04:30.312351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.292 [2024-05-15 07:04:30.321182] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.292 [2024-05-15 07:04:30.321722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.321967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.292 [2024-05-15 07:04:30.321996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.292 [2024-05-15 07:04:30.322014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.292 [2024-05-15 07:04:30.322216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.322409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.322432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.322448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.324771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.293 [2024-05-15 07:04:30.333699] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.334109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.334415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.334443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.334460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.334662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.334848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.334871] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.334887] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.337193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.293 [2024-05-15 07:04:30.346193] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.346763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.347092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.347122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.347140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.347324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.347475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.347497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.347512] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.349815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.293 [2024-05-15 07:04:30.358722] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.359091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.359420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.359470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.359488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.359599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.359767] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.359796] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.359812] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.361983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.293 [2024-05-15 07:04:30.371055] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.371499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.371718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.371743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.371759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.371920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.372155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.372179] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.372194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.374426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.293 [2024-05-15 07:04:30.383762] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.384336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.384662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.384693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.384712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.384920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.385067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.385090] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.385106] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.387375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.293 [2024-05-15 07:04:30.396707] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.397119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.397352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.397380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.397398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.397547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.397734] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.397757] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.397779] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.400075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.293 [2024-05-15 07:04:30.409267] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.409642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.409878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.409906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.409923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.410083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.410270] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.410293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.293 [2024-05-15 07:04:30.410308] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.293 [2024-05-15 07:04:30.412540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.293 [2024-05-15 07:04:30.421995] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.293 [2024-05-15 07:04:30.422460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.422835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.293 [2024-05-15 07:04:30.422890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.293 [2024-05-15 07:04:30.422908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.293 [2024-05-15 07:04:30.423103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.293 [2024-05-15 07:04:30.423272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.293 [2024-05-15 07:04:30.423294] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.423309] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.425558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.294 [2024-05-15 07:04:30.434550] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.434962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.435174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.435204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.435222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.435424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.435593] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.435616] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.435631] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.437722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.294 [2024-05-15 07:04:30.447114] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.447502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.447799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.447824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.447855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.448044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.448196] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.448218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.448234] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.450503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.294 [2024-05-15 07:04:30.460045] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.460393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.460783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.460840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.460858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.461052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.461208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.461232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.461247] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.463657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.294 [2024-05-15 07:04:30.472680] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.473069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.473328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.473353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.473368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.473534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.473704] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.473727] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.473742] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.476126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.294 [2024-05-15 07:04:30.485298] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.485770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.486017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.486046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.486064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.486265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.486453] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.486475] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.486491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.488901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.294 [2024-05-15 07:04:30.497797] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.498279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.498616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.498643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.498660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.498836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.499004] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.499029] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.499045] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.501330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.294 [2024-05-15 07:04:30.510330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.510707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.510976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.511006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.511023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.511207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.511377] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.511400] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.511416] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.294 [2024-05-15 07:04:30.513613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.294 [2024-05-15 07:04:30.523048] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.294 [2024-05-15 07:04:30.523469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.523701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.294 [2024-05-15 07:04:30.523729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:16.294 [2024-05-15 07:04:30.523751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:16.294 [2024-05-15 07:04:30.523887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:16.294 [2024-05-15 07:04:30.524052] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.294 [2024-05-15 07:04:30.524076] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.294 [2024-05-15 07:04:30.524092] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.557 [2024-05-15 07:04:30.526505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:16.557 [2024-05-15 07:04:30.535401] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.557 [2024-05-15 07:04:30.535793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.557 [2024-05-15 07:04:30.536056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.557 [2024-05-15 07:04:30.536083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.557 [2024-05-15 07:04:30.536099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.557 [2024-05-15 07:04:30.536239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.557 [2024-05-15 07:04:30.536409] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.557 [2024-05-15 07:04:30.536432] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.557 [2024-05-15 07:04:30.536447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.557 [2024-05-15 07:04:30.538736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.557 [2024-05-15 07:04:30.548013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.557 [2024-05-15 07:04:30.548346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.557 [2024-05-15 07:04:30.548580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.557 [2024-05-15 07:04:30.548605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.557 [2024-05-15 07:04:30.548620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.548762] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.548944] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.548982] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.548996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.551492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.560569] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.561015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.561196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.561222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.561244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.561426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.561577] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.561601] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.561617] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.564078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.573354] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.574015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.574286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.574315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.574333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.574463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.574632] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.574655] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.574671] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.577085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.585988] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.586462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.586679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.586709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.586727] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.586856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.587016] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.587040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.587055] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.589417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.598681] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.599065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.599295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.599324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.599341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.599496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.599630] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.599653] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.599668] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.601993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.611243] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.611661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.612024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.612054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.612072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.612238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.612408] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.612430] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.612447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.614752] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.623814] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.624289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.624549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.624577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.624595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.624744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.624858] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.624881] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.624896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.627229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.636349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.636833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.637065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.637094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.637112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.637260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.637489] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.637513] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.637529] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.639804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.649078] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.649509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.649750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.649775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.649791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.650005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.650175] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.650198] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.650213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.652571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.661613] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.558 [2024-05-15 07:04:30.662087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.662380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.558 [2024-05-15 07:04:30.662427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.558 [2024-05-15 07:04:30.662445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.558 [2024-05-15 07:04:30.662592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.558 [2024-05-15 07:04:30.662768] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.558 [2024-05-15 07:04:30.662791] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.558 [2024-05-15 07:04:30.662806] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.558 [2024-05-15 07:04:30.665118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.558 [2024-05-15 07:04:30.674135] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.674724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.675001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.675030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.675048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.675213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.675363] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.675392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.675408] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 [2024-05-15 07:04:30.677639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.559 [2024-05-15 07:04:30.686817] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.687196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.687452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.687476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.687491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.687619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.687769] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.687792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.687807] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 624154 Killed "${NVMF_APP[@]}" "$@"
00:27:16.559 07:04:30 -- host/bdevperf.sh@36 -- # tgt_init
00:27:16.559 07:04:30 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:16.559 07:04:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:16.559 07:04:30 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:16.559 07:04:30 -- common/autotest_common.sh@10 -- # set +x
00:27:16.559 [2024-05-15 07:04:30.690259] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.559 07:04:30 -- nvmf/common.sh@469 -- # nvmfpid=625269
00:27:16.559 07:04:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:16.559 07:04:30 -- nvmf/common.sh@470 -- # waitforlisten 625269
00:27:16.559 07:04:30 -- common/autotest_common.sh@819 -- # '[' -z 625269 ']'
00:27:16.559 07:04:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:16.559 07:04:30 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:16.559 07:04:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:16.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:16.559 07:04:30 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:16.559 07:04:30 -- common/autotest_common.sh@10 -- # set +x
00:27:16.559 [2024-05-15 07:04:30.699394] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.699937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.700169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.700194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.700210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.700402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.700547] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.700570] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.700591] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 [2024-05-15 07:04:30.703014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.559 [2024-05-15 07:04:30.712020] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.712659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.712923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.712958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.712976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.713142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.713319] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.713343] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.713359] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 [2024-05-15 07:04:30.715953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
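[Editor's note] The xtrace above shows nvmf/common.sh restarting the target (nvmfpid=625269) and then calling waitforlisten, which polls until the new process accepts connections on the RPC socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A rough C equivalent of that wait loop (the real helper is a bash function in autotest_common.sh; the 100 ms retry interval here is an assumption):

```c
/* Sketch of the waitforlisten idea: retry connect() on the UNIX-domain RPC
 * socket until the freshly started target is accepting, or give up after
 * max_retries attempts. Illustrative only, not the actual test helper. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };

    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        if (fd >= 0) {
            close(fd);
        }
        usleep(100 * 1000);     /* 100 ms between attempts */
    }
    return -1;                  /* process never started listening */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) ? 1 : 0;
}
```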
00:27:16.559 [2024-05-15 07:04:30.724667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.725062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.725280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.725310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.725327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.725511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.725698] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.725722] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.725737] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 [2024-05-15 07:04:30.728173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.559 [2024-05-15 07:04:30.735078] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization...
00:27:16.559 [2024-05-15 07:04:30.735153] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:16.559 [2024-05-15 07:04:30.737312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.737806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.737999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.738028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.738047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.738195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.738347] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.738375] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.738392] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 [2024-05-15 07:04:30.740784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.559 [2024-05-15 07:04:30.749875] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.750219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.750445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.750469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.750484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.750646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.750852] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.750875] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.750890] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 [2024-05-15 07:04:30.753291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.559 [2024-05-15 07:04:30.762449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.762855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.763082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 [2024-05-15 07:04:30.763111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.559 [2024-05-15 07:04:30.763129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.559 [2024-05-15 07:04:30.763277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.559 [2024-05-15 07:04:30.763446] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.559 [2024-05-15 07:04:30.763469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.559 [2024-05-15 07:04:30.763484] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.559 [2024-05-15 07:04:30.765678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.559 [2024-05-15 07:04:30.775026] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.559 [2024-05-15 07:04:30.775446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.559 EAL: No free 2048 kB hugepages reported on node 1
00:27:16.560 [2024-05-15 07:04:30.775676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.560 [2024-05-15 07:04:30.775702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.560 [2024-05-15 07:04:30.775741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.560 [2024-05-15 07:04:30.775957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.560 [2024-05-15 07:04:30.776163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.560 [2024-05-15 07:04:30.776186] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.560 [2024-05-15 07:04:30.776207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.560 [2024-05-15 07:04:30.778485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.560 [2024-05-15 07:04:30.787542] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.560 [2024-05-15 07:04:30.787976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.560 [2024-05-15 07:04:30.788178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.560 [2024-05-15 07:04:30.788204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.560 [2024-05-15 07:04:30.788219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.560 [2024-05-15 07:04:30.788393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.560 [2024-05-15 07:04:30.788508] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.560 [2024-05-15 07:04:30.788531] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.560 [2024-05-15 07:04:30.788546] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.560 [2024-05-15 07:04:30.790909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.800148] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.800556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.800810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.800838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.800856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.801074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.801207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.801243] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.801256] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.803388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.812763] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.813158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.813364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.813392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.813409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.813593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.813725] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.813748] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.813764] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.816187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.817992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:16.820 [2024-05-15 07:04:30.825352] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.825846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.826118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.826145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.826163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.826362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.826498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.826522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.826540] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.828882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.837940] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.838465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.838728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.838757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.838776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.838928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.839116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.839136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.839150] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.841449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
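[Editor's note] "Total cores available: 3" follows directly from the `-m 0xE` / `-c 0xE` core mask passed to nvmf_tgt above: 0xE is binary 1110, i.e. bits 1, 2 and 3 set, so the app is given cores 1-3 and later starts one reactor on each (see the reactor.c lines further down). A small sketch of the mask decode:

```c
/* Sketch: decode an SPDK/DPDK-style hex core mask. 0xE -> cores 1, 2, 3,
 * matching "Total cores available: 3" and the three reactor notices below. */
#include <stdio.h>

int main(void)
{
    unsigned long long mask = 0xE;  /* from "-m 0xE" in the log */
    int total = 0;

    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf("core %d selected\n", core);
            total++;
        }
    }
    printf("Total cores available: %d\n", total);
    return 0;
}
```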
00:27:16.820 [2024-05-15 07:04:30.850575] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.851049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.851278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.851306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.851324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.851526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.851713] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.851737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.851752] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.854128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.863089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.863561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.863786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.863814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.863832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.864004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.864170] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.864191] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.864205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.866539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.875577] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.876089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.876323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.876352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.876369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.876535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.876758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.876782] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.876797] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.879482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.888268] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.888872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.889130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.889157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.889175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.820 [2024-05-15 07:04:30.889342] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.820 [2024-05-15 07:04:30.889515] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.820 [2024-05-15 07:04:30.889539] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.820 [2024-05-15 07:04:30.889557] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.820 [2024-05-15 07:04:30.891856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.820 [2024-05-15 07:04:30.900784] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.820 [2024-05-15 07:04:30.901328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.901559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.820 [2024-05-15 07:04:30.901587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.820 [2024-05-15 07:04:30.901604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.901752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.901980] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.902001] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.902014] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.904171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:30.913452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:30.913896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.914155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.914181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:30.914197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.914378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.914494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.914516] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.914532] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.916896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:30.926104] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:30.926540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.926757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.926784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:30.926802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.926939] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.927115] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.927134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.927147] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.929480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:30.933421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:16.821 [2024-05-15 07:04:30.933561] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:16.821 [2024-05-15 07:04:30.933584] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:16.821 [2024-05-15 07:04:30.933597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:16.821 [2024-05-15 07:04:30.933651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:16.821 [2024-05-15 07:04:30.933711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:16.821 [2024-05-15 07:04:30.933714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:16.821 [2024-05-15 07:04:30.938528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:30.938921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.939294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.939319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:30.939335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.939506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.939662] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.939683] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.939698] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.941902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:30.950880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:30.951424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.951673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.951702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:30.951722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.951895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.952082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.952105] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.952122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.954322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
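[Editor's note] The three "Reactor started on core N" notices above match the 0xE mask: one event loop per selected core (1, 2 and 3). As a rough model of that per-core pinning (not SPDK's reactor implementation), a thread can be bound to a single CPU with pthread affinity; compile with -pthread:

```c
/* Sketch of the reactor-per-core idea: pin the calling thread to one CPU,
 * the way each reactor above runs on its own core. Illustrative only. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int pin_to_core(int core)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(void)
{
    if (pin_to_core(1) == 0) {
        /* After pinning, this thread only ever runs on core 1. */
        printf("Reactor started on core %d\n", sched_getcpu());
    }
    return 0;
}
```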
00:27:16.821 [2024-05-15 07:04:30.963040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:30.963565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.963776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.963802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:30.963821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.964001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.964144] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.964177] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.964193] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.966437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:30.975406] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:30.975906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.976151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.976178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:30.976197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.976389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.976588] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.976610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.976626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.978761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:30.987913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:30.988495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.988713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:30.988738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:30.988757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:30.988955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:30.989133] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:30.989155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:30.989171] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:30.991378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:31.000255] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:31.000694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:31.000911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:31.000943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:31.000962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.821 [2024-05-15 07:04:31.001118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.821 [2024-05-15 07:04:31.001254] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.821 [2024-05-15 07:04:31.001275] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.821 [2024-05-15 07:04:31.001301] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.821 [2024-05-15 07:04:31.003444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.821 [2024-05-15 07:04:31.012501] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.821 [2024-05-15 07:04:31.013168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:31.013490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.821 [2024-05-15 07:04:31.013516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.821 [2024-05-15 07:04:31.013534] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.822 [2024-05-15 07:04:31.013721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.822 [2024-05-15 07:04:31.013889] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.822 [2024-05-15 07:04:31.013910] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.822 [2024-05-15 07:04:31.013949] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.822 [2024-05-15 07:04:31.015998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.822 [2024-05-15 07:04:31.024789] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.822 [2024-05-15 07:04:31.025127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.822 [2024-05-15 07:04:31.025339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.822 [2024-05-15 07:04:31.025364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.822 [2024-05-15 07:04:31.025380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.822 [2024-05-15 07:04:31.025563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.822 [2024-05-15 07:04:31.025776] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.822 [2024-05-15 07:04:31.025796] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.822 [2024-05-15 07:04:31.025810] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.822 [2024-05-15 07:04:31.027849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.822 [2024-05-15 07:04:31.037287] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.822 [2024-05-15 07:04:31.037651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.822 [2024-05-15 07:04:31.037861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.822 [2024-05-15 07:04:31.037886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.822 [2024-05-15 07:04:31.037902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.822 [2024-05-15 07:04:31.038043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.822 [2024-05-15 07:04:31.038179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.822 [2024-05-15 07:04:31.038200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.822 [2024-05-15 07:04:31.038213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.822 [2024-05-15 07:04:31.040343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.822 [2024-05-15 07:04:31.049618] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.822 [2024-05-15 07:04:31.049978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.822 [2024-05-15 07:04:31.050163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.822 [2024-05-15 07:04:31.050189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:16.822 [2024-05-15 07:04:31.050204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:16.822 [2024-05-15 07:04:31.050321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:16.822 [2024-05-15 07:04:31.050455] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:16.822 [2024-05-15 07:04:31.050476] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:16.822 [2024-05-15 07:04:31.050489] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:16.822 [2024-05-15 07:04:31.052866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.081 [2024-05-15 07:04:31.062146] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.062483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.062693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.062719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.062735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.062852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.063043] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.063064] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.082 [2024-05-15 07:04:31.063078] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.082 [2024-05-15 07:04:31.065015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.082 [2024-05-15 07:04:31.074438] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.074813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.075032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.075058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.075074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.075269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.075432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.075452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.082 [2024-05-15 07:04:31.075465] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.082 [2024-05-15 07:04:31.077553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.082 [2024-05-15 07:04:31.086921] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.087295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.087501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.087526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.087541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.087674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.087856] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.087876] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.082 [2024-05-15 07:04:31.087889] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.082 [2024-05-15 07:04:31.089923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.082 [2024-05-15 07:04:31.099353] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.099726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.099937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.099963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.099979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.100142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.100321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.100341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.082 [2024-05-15 07:04:31.100354] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.082 [2024-05-15 07:04:31.102456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.082 [2024-05-15 07:04:31.111630] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.111953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.112159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.112187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.112203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.112384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.112563] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.112582] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.082 [2024-05-15 07:04:31.112596] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.082 [2024-05-15 07:04:31.114734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.082 [2024-05-15 07:04:31.124100] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.124441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.124633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.124658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.124674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.124840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.125015] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.125036] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.082 [2024-05-15 07:04:31.125050] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.082 [2024-05-15 07:04:31.127011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.082 [2024-05-15 07:04:31.136372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.136778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.136993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.137020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.137035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.137186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.137384] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.137404] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.082 [2024-05-15 07:04:31.137418] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.082 [2024-05-15 07:04:31.139437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.082 [2024-05-15 07:04:31.148739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.082 [2024-05-15 07:04:31.149143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.149320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.082 [2024-05-15 07:04:31.149345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.082 [2024-05-15 07:04:31.149360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.082 [2024-05-15 07:04:31.149476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.082 [2024-05-15 07:04:31.149642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.082 [2024-05-15 07:04:31.149662] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.149676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.151680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.160909] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.161293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.161501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.161531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.161548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.161713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.161893] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.161913] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.161927] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.164001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.173262] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.173681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.173883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.173908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.173924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.174131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.174294] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.174314] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.174327] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.176316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.185546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.185905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.186138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.186163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.186179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.186296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.186431] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.186452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.186465] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.188378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.197952] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.198341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.198524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.198549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.198569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.198733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.198864] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.198883] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.198897] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.200744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.210216] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.210568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.210743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.210769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.210785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.210942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.211098] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.211117] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.211131] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.213278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.222647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.222979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.223175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.223200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.223216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.223350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.223531] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.223561] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.223574] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.225584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.235055] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.235486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.235695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.235723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.235739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.235845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.236023] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.236044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.236059] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.238321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.247318] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.247701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.247904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.247935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.247952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.248086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.248284] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.248304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.083 [2024-05-15 07:04:31.248318] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.083 [2024-05-15 07:04:31.250442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.083 [2024-05-15 07:04:31.259613] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.083 [2024-05-15 07:04:31.259970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.260166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.083 [2024-05-15 07:04:31.260191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.083 [2024-05-15 07:04:31.260207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.083 [2024-05-15 07:04:31.260356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.083 [2024-05-15 07:04:31.260535] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.083 [2024-05-15 07:04:31.260555] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.084 [2024-05-15 07:04:31.260569] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.084 [2024-05-15 07:04:31.262541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.084 [2024-05-15 07:04:31.271862] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.084 [2024-05-15 07:04:31.272261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.272430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.272454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.084 [2024-05-15 07:04:31.272470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.084 [2024-05-15 07:04:31.272635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.084 [2024-05-15 07:04:31.272758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.084 [2024-05-15 07:04:31.272779] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.084 [2024-05-15 07:04:31.272792] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.084 [2024-05-15 07:04:31.274821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.084 [2024-05-15 07:04:31.284350] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.084 [2024-05-15 07:04:31.284705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.284897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.284923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.084 [2024-05-15 07:04:31.284948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.084 [2024-05-15 07:04:31.285099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.084 [2024-05-15 07:04:31.285264] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.084 [2024-05-15 07:04:31.285284] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.084 [2024-05-15 07:04:31.285298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.084 [2024-05-15 07:04:31.287352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.084 [2024-05-15 07:04:31.296653] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.084 [2024-05-15 07:04:31.297054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.297257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.297283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.084 [2024-05-15 07:04:31.297299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.084 [2024-05-15 07:04:31.297480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.084 [2024-05-15 07:04:31.297627] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.084 [2024-05-15 07:04:31.297647] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.084 [2024-05-15 07:04:31.297661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.084 [2024-05-15 07:04:31.299764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.084 [2024-05-15 07:04:31.309064] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.084 [2024-05-15 07:04:31.309422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.309608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.084 [2024-05-15 07:04:31.309634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.084 [2024-05-15 07:04:31.309650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.084 [2024-05-15 07:04:31.309766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.084 [2024-05-15 07:04:31.309989] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.084 [2024-05-15 07:04:31.310010] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.084 [2024-05-15 07:04:31.310029] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.084 [2024-05-15 07:04:31.312073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.343 [2024-05-15 07:04:31.321436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.321898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.322078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.322104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.322120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.322221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.322387] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.322408] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.344 [2024-05-15 07:04:31.322421] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.344 [2024-05-15 07:04:31.324406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.344 [2024-05-15 07:04:31.333734] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.334088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.334267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.334294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.334310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.334475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.334591] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.334611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.344 [2024-05-15 07:04:31.334625] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.344 [2024-05-15 07:04:31.336713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.344 [2024-05-15 07:04:31.346002] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.346385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.346579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.346605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.346620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.346783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.346915] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.346942] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.344 [2024-05-15 07:04:31.346957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.344 [2024-05-15 07:04:31.349010] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.344 [2024-05-15 07:04:31.358370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.358696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.358906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.358938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.358956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.359154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.359291] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.359312] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.344 [2024-05-15 07:04:31.359325] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.344 [2024-05-15 07:04:31.361270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.344 [2024-05-15 07:04:31.370798] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.371234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.371476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.371502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.371518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.371684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.371865] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.371885] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.344 [2024-05-15 07:04:31.371899] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.344 [2024-05-15 07:04:31.373850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.344 [2024-05-15 07:04:31.383228] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.383597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.383767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.383792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.383808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.383950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.384163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.384184] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.344 [2024-05-15 07:04:31.384198] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.344 [2024-05-15 07:04:31.386260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.344 [2024-05-15 07:04:31.395661] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.396049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.396231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.396256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.396272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.396372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.396535] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.396555] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.344 [2024-05-15 07:04:31.396568] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.344 [2024-05-15 07:04:31.398690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.344 [2024-05-15 07:04:31.407822] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.344 [2024-05-15 07:04:31.408181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.408383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.344 [2024-05-15 07:04:31.408409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.344 [2024-05-15 07:04:31.408425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.344 [2024-05-15 07:04:31.408574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.344 [2024-05-15 07:04:31.408770] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.344 [2024-05-15 07:04:31.408790] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.408804] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.410904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.420088] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.345 [2024-05-15 07:04:31.420449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.420629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.420654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.345 [2024-05-15 07:04:31.420670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.345 [2024-05-15 07:04:31.420819] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.345 [2024-05-15 07:04:31.420996] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.345 [2024-05-15 07:04:31.421017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.421031] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.423109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.432489] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.345 [2024-05-15 07:04:31.432856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.433041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.433068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.345 [2024-05-15 07:04:31.433084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.345 [2024-05-15 07:04:31.433264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.345 [2024-05-15 07:04:31.433428] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.345 [2024-05-15 07:04:31.433447] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.433461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.435492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.444739] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.345 [2024-05-15 07:04:31.445156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.445340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.445366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.345 [2024-05-15 07:04:31.445381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.345 [2024-05-15 07:04:31.445546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.345 [2024-05-15 07:04:31.445711] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.345 [2024-05-15 07:04:31.445732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.445745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.447767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.457283] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.345 [2024-05-15 07:04:31.457670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.457867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.457892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.345 [2024-05-15 07:04:31.457908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.345 [2024-05-15 07:04:31.458031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.345 [2024-05-15 07:04:31.458198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.345 [2024-05-15 07:04:31.458218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.458231] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.460321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.469640] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.345 [2024-05-15 07:04:31.470026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.470237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.470263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.345 [2024-05-15 07:04:31.470278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.345 [2024-05-15 07:04:31.470444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.345 [2024-05-15 07:04:31.470563] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.345 [2024-05-15 07:04:31.470584] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.470601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.472730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.481753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.345 [2024-05-15 07:04:31.482139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.482339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.482364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.345 [2024-05-15 07:04:31.482380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.345 [2024-05-15 07:04:31.482512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.345 [2024-05-15 07:04:31.482678] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.345 [2024-05-15 07:04:31.482698] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.482712] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.484885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.494336] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:17.345 [2024-05-15 07:04:31.494656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.494859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.345 [2024-05-15 07:04:31.494885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420
00:27:17.345 [2024-05-15 07:04:31.494900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set
00:27:17.345 [2024-05-15 07:04:31.495026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor
00:27:17.345 [2024-05-15 07:04:31.495179] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:17.345 [2024-05-15 07:04:31.495200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:17.345 [2024-05-15 07:04:31.495214] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:17.345 [2024-05-15 07:04:31.497281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:17.345 [2024-05-15 07:04:31.506706] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.345 [2024-05-15 07:04:31.507039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.345 [2024-05-15 07:04:31.507213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.345 [2024-05-15 07:04:31.507239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.345 [2024-05-15 07:04:31.507259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.345 [2024-05-15 07:04:31.507442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.345 [2024-05-15 07:04:31.507590] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.345 [2024-05-15 07:04:31.507610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.345 [2024-05-15 07:04:31.507624] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.345 [2024-05-15 07:04:31.509626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.345 [2024-05-15 07:04:31.519072] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.345 [2024-05-15 07:04:31.519443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.345 [2024-05-15 07:04:31.519669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.345 [2024-05-15 07:04:31.519694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.345 [2024-05-15 07:04:31.519709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.345 [2024-05-15 07:04:31.519842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.345 [2024-05-15 07:04:31.520033] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.345 [2024-05-15 07:04:31.520054] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.345 [2024-05-15 07:04:31.520067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.345 [2024-05-15 07:04:31.522186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.346 [2024-05-15 07:04:31.531415] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.346 [2024-05-15 07:04:31.531802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.532003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.532029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.346 [2024-05-15 07:04:31.532045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.346 [2024-05-15 07:04:31.532209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.346 [2024-05-15 07:04:31.532390] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.346 [2024-05-15 07:04:31.532412] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.346 [2024-05-15 07:04:31.532425] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.346 [2024-05-15 07:04:31.534579] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.346 [2024-05-15 07:04:31.543773] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.346 [2024-05-15 07:04:31.544125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.544333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.544359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.346 [2024-05-15 07:04:31.544374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.346 [2024-05-15 07:04:31.544545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.346 [2024-05-15 07:04:31.544742] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.346 [2024-05-15 07:04:31.544763] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.346 [2024-05-15 07:04:31.544776] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.346 [2024-05-15 07:04:31.546800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.346 [2024-05-15 07:04:31.556144] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.346 [2024-05-15 07:04:31.556477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.556677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.556702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.346 [2024-05-15 07:04:31.556718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.346 [2024-05-15 07:04:31.556866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.346 [2024-05-15 07:04:31.557074] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.346 [2024-05-15 07:04:31.557096] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.346 [2024-05-15 07:04:31.557109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.346 [2024-05-15 07:04:31.559199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.346 [2024-05-15 07:04:31.568438] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.346 [2024-05-15 07:04:31.568796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.569012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.346 [2024-05-15 07:04:31.569039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.346 [2024-05-15 07:04:31.569055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.346 [2024-05-15 07:04:31.569173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.346 [2024-05-15 07:04:31.569338] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.346 [2024-05-15 07:04:31.569359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.346 [2024-05-15 07:04:31.569373] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.346 [2024-05-15 07:04:31.571553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.605 [2024-05-15 07:04:31.580807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.605 [2024-05-15 07:04:31.581144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.605 [2024-05-15 07:04:31.581382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.605 [2024-05-15 07:04:31.581408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.605 [2024-05-15 07:04:31.581423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.605 [2024-05-15 07:04:31.581573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.605 [2024-05-15 07:04:31.581778] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.605 [2024-05-15 07:04:31.581799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.605 [2024-05-15 07:04:31.581813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.605 [2024-05-15 07:04:31.584095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.605 [2024-05-15 07:04:31.593177] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.605 [2024-05-15 07:04:31.593559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.605 [2024-05-15 07:04:31.593735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.605 [2024-05-15 07:04:31.593760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.605 [2024-05-15 07:04:31.593776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.605 [2024-05-15 07:04:31.593965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.605 [2024-05-15 07:04:31.594130] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.605 [2024-05-15 07:04:31.594151] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.605 [2024-05-15 07:04:31.594164] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.605 [2024-05-15 07:04:31.596437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.605 [2024-05-15 07:04:31.605269] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.605 [2024-05-15 07:04:31.605628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.605 [2024-05-15 07:04:31.605827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.605 [2024-05-15 07:04:31.605853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.605 [2024-05-15 07:04:31.605868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.606055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.606203] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.606223] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.606237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.608431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.606 [2024-05-15 07:04:31.617596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 [2024-05-15 07:04:31.617971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.618174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.618200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.618215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.618364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.618527] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.618554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.618570] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.620588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.606 [2024-05-15 07:04:31.629863] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 [2024-05-15 07:04:31.630194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.630396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.630421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.630437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.630618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.630783] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.630803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.630817] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.632885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.606 [2024-05-15 07:04:31.642287] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 [2024-05-15 07:04:31.642666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.642874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.642899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.642914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.643086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.643269] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.643290] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.643304] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.645465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
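
Each cycle in the block above is one pass through the bdev_nvme reset path: nvme_ctrlr_disconnect() tears down the qpair, the reconnect poller issues a fresh connect() toward 10.0.0.2:4420, and the kernel answers with errno 111 (ECONNREFUSED) because nothing is accepting on that port yet, so spdk_nvme_ctrlr_reconnect_poll_async() marks the controller failed and the reset is retried. The listener state can be checked from the same shell with a one-liner like the sketch below (not part of the test scripts; assumes bash's /dev/tcp redirection and the coreutils timeout command):

    # Probe the target port; a refused connect() here is the same errno 111
    # path reported by the posix_sock_create errors above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 is accepting connections"
    else
        echo "connect() refused or timed out (ECONNREFUSED = errno 111)"
    fi
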
00:27:17.606 07:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:17.606 07:04:31 -- common/autotest_common.sh@852 -- # return 0 00:27:17.606 07:04:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:17.606 07:04:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:17.606 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:17.606 [2024-05-15 07:04:31.654638] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 [2024-05-15 07:04:31.655027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.655234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.655260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.655276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.655458] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.655627] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.655647] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.655661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.657810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.606 07:04:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.606 [2024-05-15 07:04:31.666867] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 07:04:31 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:17.606 07:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.606 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:17.606 [2024-05-15 07:04:31.667224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.667406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.667431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.667447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.667612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.667765] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.667785] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.667799] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:17.606 [2024-05-15 07:04:31.669988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.606 [2024-05-15 07:04:31.672414] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.606 07:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.606 07:04:31 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:17.606 07:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.606 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:17.606 [2024-05-15 07:04:31.679444] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 [2024-05-15 07:04:31.679862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.680082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.680109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.680125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.680273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.680439] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.680460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.680473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.682558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.606 [2024-05-15 07:04:31.691809] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 [2024-05-15 07:04:31.692214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.692435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.692460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.692478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.692640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.692784] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.692803] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.692816] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.694922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.606 [2024-05-15 07:04:31.704085] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.606 [2024-05-15 07:04:31.704676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.704909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.606 [2024-05-15 07:04:31.704953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.606 [2024-05-15 07:04:31.704972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.606 [2024-05-15 07:04:31.705116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.606 [2024-05-15 07:04:31.705274] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.606 [2024-05-15 07:04:31.705306] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.606 [2024-05-15 07:04:31.705321] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.606 [2024-05-15 07:04:31.707499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.606 Malloc0 00:27:17.606 07:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.606 07:04:31 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:17.606 07:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.606 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:17.607 [2024-05-15 07:04:31.716374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.607 [2024-05-15 07:04:31.716775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.607 [2024-05-15 07:04:31.716985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.607 [2024-05-15 07:04:31.717012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.607 [2024-05-15 07:04:31.717029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.607 [2024-05-15 07:04:31.717217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.607 [2024-05-15 07:04:31.717372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.607 [2024-05-15 07:04:31.717393] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.607 [2024-05-15 07:04:31.717407] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.607 [2024-05-15 07:04:31.719534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:17.607 07:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.607 07:04:31 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:17.607 07:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.607 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:17.607 [2024-05-15 07:04:31.728864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.607 [2024-05-15 07:04:31.729230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.607 [2024-05-15 07:04:31.729462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.607 [2024-05-15 07:04:31.729488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x656400 with addr=10.0.0.2, port=4420 00:27:17.607 [2024-05-15 07:04:31.729508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x656400 is same with the state(5) to be set 00:27:17.607 [2024-05-15 07:04:31.729673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x656400 (9): Bad file descriptor 00:27:17.607 [2024-05-15 07:04:31.729842] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:17.607 [2024-05-15 07:04:31.729863] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:17.607 [2024-05-15 07:04:31.729876] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:17.607 07:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.607 07:04:31 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:17.607 07:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:17.607 07:04:31 -- common/autotest_common.sh@10 -- # set +x 00:27:17.607 [2024-05-15 07:04:31.731944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:17.607 [2024-05-15 07:04:31.733826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:17.607 07:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:17.607 07:04:31 -- host/bdevperf.sh@38 -- # wait 624461 00:27:17.607 [2024-05-15 07:04:31.741078] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:17.607 [2024-05-15 07:04:31.770157] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
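
The rpc_cmd calls traced above assemble the target side step by step, and the reset loop only succeeds once the last of them installs the 10.0.0.2:4420 listener. In the harness, rpc_cmd forwards to scripts/rpc.py, so the equivalent manual sequence against a running nvmf_tgt would look roughly like this (a sketch; the rpc.py path is the one used by this workspace and the flags are copied from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # transport options as passed by bdevperf.sh
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -a flag allows any host NQN to connect, which is what lets the initiator attach without a host entry being registered first.
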
00:27:27.572
00:27:27.572 Latency(us)
00:27:27.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:27.572 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:27.572 Verification LBA range: start 0x0 length 0x4000
00:27:27.572 Nvme1n1 : 15.01 9594.60 37.48 15201.02 0.00 5147.26 1377.47 22039.51
00:27:27.572 ===================================================================================================================
00:27:27.572 Total : 9594.60 37.48 15201.02 0.00 5147.26 1377.47 22039.51
00:27:27.572 07:04:40 -- host/bdevperf.sh@39 -- # sync
00:27:27.572 07:04:40 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:27.572 07:04:40 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:27.572 07:04:40 -- common/autotest_common.sh@10 -- # set +x
00:27:27.572 07:04:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:27.572 07:04:40 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:27.572 07:04:40 -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:27.572 07:04:40 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:27.572 07:04:40 -- nvmf/common.sh@116 -- # sync
00:27:27.572 07:04:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:27.572 07:04:40 -- nvmf/common.sh@119 -- # set +e
00:27:27.572 07:04:40 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:27.572 07:04:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:27.572 rmmod nvme_tcp
00:27:27.572 rmmod nvme_fabrics
00:27:27.572 rmmod nvme_keyring
00:27:27.572 07:04:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:27:27.572 07:04:40 -- nvmf/common.sh@123 -- # set -e
00:27:27.572 07:04:40 -- nvmf/common.sh@124 -- # return 0
00:27:27.572 07:04:40 -- nvmf/common.sh@477 -- # '[' -n 625269 ']'
00:27:27.572 07:04:40 -- nvmf/common.sh@478 -- # killprocess 625269
00:27:27.572 07:04:40 -- common/autotest_common.sh@926 -- # '[' -z 625269 ']'
00:27:27.572 07:04:40 -- common/autotest_common.sh@930 -- # kill -0 625269
00:27:27.572 07:04:40 -- common/autotest_common.sh@931 -- # uname
00:27:27.572 07:04:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:27.572 07:04:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 625269
00:27:27.572 07:04:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:27:27.572 07:04:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:27:27.572 07:04:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 625269'
00:27:27.572 killing process with pid 625269
00:27:27.572 07:04:40 -- common/autotest_common.sh@945 -- # kill 625269
00:27:27.572 07:04:40 -- common/autotest_common.sh@950 -- # wait 625269
00:27:27.572 07:04:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:27:27.572 07:04:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:27:27.572 07:04:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:27:27.572 07:04:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:27.572 07:04:40 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:27:27.572 07:04:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:27.572 07:04:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:27.572 07:04:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:28.950 07:04:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:27:28.950
00:27:28.950 real 0m23.734s
00:27:28.950 user 1m3.098s
00:27:28.950 sys 0m4.666s
00:27:28.950 07:04:42 --
common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.950 07:04:42 -- common/autotest_common.sh@10 -- # set +x 00:27:28.950 ************************************ 00:27:28.950 END TEST nvmf_bdevperf 00:27:28.950 ************************************ 00:27:28.950 07:04:43 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:28.950 07:04:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:28.950 07:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:28.950 07:04:43 -- common/autotest_common.sh@10 -- # set +x 00:27:28.950 ************************************ 00:27:28.950 START TEST nvmf_target_disconnect 00:27:28.950 ************************************ 00:27:28.950 07:04:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:28.950 * Looking for test storage... 00:27:28.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:28.950 07:04:43 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.950 07:04:43 -- nvmf/common.sh@7 -- # uname -s 00:27:28.950 07:04:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.950 07:04:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.950 07:04:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.950 07:04:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.950 07:04:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.950 07:04:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.950 07:04:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.950 07:04:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.950 07:04:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.950 07:04:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.950 07:04:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.950 07:04:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.950 07:04:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.950 07:04:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.950 07:04:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.950 07:04:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.950 07:04:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.950 07:04:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.950 07:04:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.950 07:04:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.950 07:04:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.950 07:04:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.950 07:04:43 -- paths/export.sh@5 -- # export PATH 00:27:28.950 07:04:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.950 07:04:43 -- nvmf/common.sh@46 -- # : 0 00:27:28.950 07:04:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:28.950 07:04:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:28.950 07:04:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:28.950 07:04:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.950 07:04:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.950 07:04:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:28.950 07:04:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:28.950 07:04:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:28.950 07:04:43 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:28.950 07:04:43 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:28.950 07:04:43 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:28.950 07:04:43 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:27:28.951 07:04:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:28.951 07:04:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:28.951 07:04:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:28.951 07:04:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:28.951 07:04:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:28.951 07:04:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:28.951 07:04:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:28.951 07:04:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.951 07:04:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:28.951 07:04:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:28.951 07:04:43 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:27:28.951 07:04:43 -- common/autotest_common.sh@10 -- # set +x 00:27:31.480 07:04:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:31.480 07:04:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:31.480 07:04:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:31.480 07:04:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:31.480 07:04:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:31.480 07:04:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:31.480 07:04:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:31.480 07:04:45 -- nvmf/common.sh@294 -- # net_devs=() 00:27:31.480 07:04:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:31.480 07:04:45 -- nvmf/common.sh@295 -- # e810=() 00:27:31.480 07:04:45 -- nvmf/common.sh@295 -- # local -ga e810 00:27:31.480 07:04:45 -- nvmf/common.sh@296 -- # x722=() 00:27:31.480 07:04:45 -- nvmf/common.sh@296 -- # local -ga x722 00:27:31.480 07:04:45 -- nvmf/common.sh@297 -- # mlx=() 00:27:31.480 07:04:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:31.480 07:04:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:31.480 07:04:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:31.480 07:04:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:31.480 07:04:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:31.480 07:04:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:31.480 07:04:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:31.480 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:31.480 07:04:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:31.480 07:04:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:31.480 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:31.480 07:04:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:31.480 07:04:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:31.480 07:04:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.480 07:04:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:31.480 07:04:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.480 07:04:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:31.480 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:31.480 07:04:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.480 07:04:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:31.480 07:04:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:31.480 07:04:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:31.480 07:04:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:31.480 07:04:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:31.480 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:31.480 07:04:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:31.480 07:04:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:31.480 07:04:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:31.480 07:04:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:31.480 07:04:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:31.480 07:04:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.480 07:04:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.480 07:04:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:31.480 07:04:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:31.480 07:04:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:31.480 07:04:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:31.480 07:04:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:31.480 07:04:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:31.480 07:04:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.480 07:04:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:31.480 07:04:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:31.480 07:04:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:31.480 07:04:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.480 07:04:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.480 07:04:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.480 07:04:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:31.480 07:04:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.480 07:04:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.480 07:04:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.480 07:04:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:31.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:31.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:31.481 00:27:31.481 --- 10.0.0.2 ping statistics --- 00:27:31.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.481 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:31.481 07:04:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:31.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:27:31.481 00:27:31.481 --- 10.0.0.1 ping statistics --- 00:27:31.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.481 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:27:31.481 07:04:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.481 07:04:45 -- nvmf/common.sh@410 -- # return 0 00:27:31.481 07:04:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:31.481 07:04:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.481 07:04:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:31.481 07:04:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:31.481 07:04:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.481 07:04:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:31.481 07:04:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:31.481 07:04:45 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:31.481 07:04:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:31.481 07:04:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.481 07:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:31.481 ************************************ 00:27:31.481 START TEST nvmf_target_disconnect_tc1 00:27:31.481 ************************************ 00:27:31.481 07:04:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:27:31.481 07:04:45 -- host/target_disconnect.sh@32 -- # set +e 00:27:31.481 07:04:45 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:31.739 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.739 [2024-05-15 07:04:45.769882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.739 [2024-05-15 07:04:45.770146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.739 [2024-05-15 07:04:45.770180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc54920 with addr=10.0.0.2, port=4420 00:27:31.739 [2024-05-15 07:04:45.770213] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:31.739 [2024-05-15 07:04:45.770233] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:31.739 [2024-05-15 07:04:45.770248] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:31.739 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:31.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:31.739 Initializing NVMe Controllers 00:27:31.739 07:04:45 -- host/target_disconnect.sh@33 -- # trap - ERR 00:27:31.739 07:04:45 -- host/target_disconnect.sh@33 -- # print_backtrace 00:27:31.739 07:04:45 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:27:31.739 07:04:45 -- common/autotest_common.sh@1132 -- # return 0 00:27:31.739 
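
The probe failure above is the tc1 pass condition: no nvmf_tgt is listening on 10.0.0.2:4420 at this point, so spdk_nvme_probe() cannot create its probe context and the reconnect example exits with an error, which the script then checks for in the '[' 1 '!=' 1 ']' test that follows. The probe can be repeated by hand with the command from the trace (a sketch; the binary path is the one printed above), and it should keep exiting non-zero until a listener is installed:

    # Expected to fail with the errno 111 connect() errors while the port is closed.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    echo "reconnect exit status: $?"    # non-zero is the outcome tc1 asserts
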
07:04:45 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:27:31.739 07:04:45 -- host/target_disconnect.sh@41 -- # set -e 00:27:31.739 00:27:31.739 real 0m0.106s 00:27:31.739 user 0m0.044s 00:27:31.739 sys 0m0.061s 00:27:31.739 07:04:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.739 07:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:31.739 ************************************ 00:27:31.739 END TEST nvmf_target_disconnect_tc1 00:27:31.739 ************************************ 00:27:31.739 07:04:45 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:31.739 07:04:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:31.739 07:04:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.739 07:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:31.739 ************************************ 00:27:31.739 START TEST nvmf_target_disconnect_tc2 00:27:31.739 ************************************ 00:27:31.739 07:04:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:27:31.739 07:04:45 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:27:31.739 07:04:45 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:31.739 07:04:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:31.739 07:04:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:31.739 07:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:31.739 07:04:45 -- nvmf/common.sh@469 -- # nvmfpid=628755 00:27:31.739 07:04:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:31.739 07:04:45 -- nvmf/common.sh@470 -- # waitforlisten 628755 00:27:31.739 07:04:45 -- common/autotest_common.sh@819 -- # '[' -z 628755 ']' 00:27:31.740 07:04:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.740 07:04:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:31.740 07:04:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.740 07:04:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:31.740 07:04:45 -- common/autotest_common.sh@10 -- # set +x 00:27:31.740 [2024-05-15 07:04:45.861316] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:31.740 [2024-05-15 07:04:45.861399] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.740 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.740 [2024-05-15 07:04:45.934714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.998 [2024-05-15 07:04:46.036578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:31.998 [2024-05-15 07:04:46.036725] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.998 [2024-05-15 07:04:46.036747] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.998 [2024-05-15 07:04:46.036765] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:31.998 [2024-05-15 07:04:46.036862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:27:31.998 [2024-05-15 07:04:46.036920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:27:31.998 [2024-05-15 07:04:46.037053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:27:31.998 [2024-05-15 07:04:46.037062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:32.563 07:04:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:32.564 07:04:46 -- common/autotest_common.sh@852 -- # return 0 00:27:32.564 07:04:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:32.564 07:04:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:32.564 07:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:32.821 07:04:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.822 07:04:46 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:32.822 07:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.822 07:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 Malloc0 00:27:32.822 07:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.822 07:04:46 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:32.822 07:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.822 07:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 [2024-05-15 07:04:46.830411] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:32.822 07:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.822 07:04:46 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.822 07:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.822 07:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 07:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.822 07:04:46 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.822 07:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.822 07:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 07:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.822 07:04:46 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:32.822 07:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.822 07:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 [2024-05-15 07:04:46.858665] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:32.822 07:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.822 07:04:46 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:32.822 07:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:32.822 07:04:46 -- common/autotest_common.sh@10 -- # set +x 00:27:32.822 07:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:32.822 07:04:46 -- host/target_disconnect.sh@50 -- # reconnectpid=628914 00:27:32.822 07:04:46 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:32.822 07:04:46 -- 
00:27:32.822 EAL: No free 2048 kB hugepages reported on node 1
00:27:34.728 07:04:48 -- host/target_disconnect.sh@53 -- # kill -9 628755
00:27:34.728 07:04:48 -- host/target_disconnect.sh@55 -- # sleep 2
00:27:34.728 Read completed with error (sct=0, sc=8)
00:27:34.728 starting I/O failed
00:27:34.728 Read completed with error (sct=0, sc=8)
00:27:34.728 starting I/O failed
[... every remaining outstanding Read/Write on this qpair completes with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:27:34.728 [2024-05-15 07:04:48.885005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... the same (sct=0, sc=8) completion failures repeat for the next queue ...]
00:27:34.729 [2024-05-15 07:04:48.885313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... and again ...]
00:27:34.729 [2024-05-15 07:04:48.885690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... and again ...]
00:27:34.729 [2024-05-15 07:04:48.886053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:34.729 [2024-05-15 07:04:48.886262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.729 [2024-05-15 07:04:48.886496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:34.729 [2024-05-15 07:04:48.886523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:34.729 qpair failed and we were unable to recover it.
[... this four-line sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x24809f0 at 10.0.0.2 port 4420, then "qpair failed and we were unable to recover it.") repeats for dozens of further reconnect attempts ...]
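
Two error codes account for everything above. The NVMe status (sct=0, sc=8) is, per the NVMe base specification, generic status 0x08, Command Aborted due to SQ Deletion: the expected completion for I/O still in flight when the host tears down its queues after the target is SIGKILLed. The errno = 111 on the reconnect attempts is Linux's ECONNREFUSED, since nothing is listening on 10.0.0.2:4420 anymore. The errno mapping is easy to confirm (assumes python3 is present on the node):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused
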
00:27:34.734 [2024-05-15 07:04:48.937939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.938163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.938189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.938409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.938665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.938689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.938891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.939130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.939158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.939374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.939633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.939660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.939842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.940029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.940057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.940282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.940485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.940510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.940717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.940908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.940942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-05-15 07:04:48.941164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.941372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.941396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.941592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.941813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.941842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.942066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.942391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.942440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.942665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.942869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.942894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.943145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.943496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.943543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.943774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.943974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.943998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.944200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.944454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.944482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-05-15 07:04:48.944707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.944947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.944975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.945216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.945440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.945465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.945692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.945888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.945915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.946123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.946292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.946316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.946492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.946690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.946716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.946907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.947167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.947193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.947420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.947659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.947691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 
00:27:34.734 [2024-05-15 07:04:48.947913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.948142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.948170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.948375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.948572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.948596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.948822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.949016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.949044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.949268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.949524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.949551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.949781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.950010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.950035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.950263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.950640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.950695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.734 qpair failed and we were unable to recover it. 00:27:34.734 [2024-05-15 07:04:48.950920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.734 [2024-05-15 07:04:48.951111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.951138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-05-15 07:04:48.951350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.951591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.951615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.951785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.952008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.952033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.952252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.952546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.952578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.952811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.953040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.953065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.953269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.953464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.953493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.953688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.953908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.953940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.954161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.954407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.954434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-05-15 07:04:48.954678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.954905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.954942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.955152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.955348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.955373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.955553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.955780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.955808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.956020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.956221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.956245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.956472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.956670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.956729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.956958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.957191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.957216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.957437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.957612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.957636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 
00:27:34.735 [2024-05-15 07:04:48.957839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.958072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.958100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.958342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.958653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.958712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:34.735 [2024-05-15 07:04:48.958940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.959143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.735 [2024-05-15 07:04:48.959168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:34.735 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.959342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.959574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.959598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.959774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.959977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.960007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.960237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.960431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.960458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.960726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.960968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.960994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 
00:27:35.006 [2024-05-15 07:04:48.961190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.961364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.961388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.961585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.961760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.961786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.962026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.962295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.962320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.962520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.962771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.962795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.963005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.963200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.963224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.963427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.963618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.963642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.963864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.964089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.964117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 
00:27:35.006 [2024-05-15 07:04:48.964348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.964696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.964754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.964972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.965175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.965204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.965403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.965590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.965619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.965846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.966049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.966074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.966299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.966636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.966698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.966912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.967146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.967170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.006 qpair failed and we were unable to recover it. 00:27:35.006 [2024-05-15 07:04:48.967396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.006 [2024-05-15 07:04:48.967570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.967596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 
00:27:35.007 [2024-05-15 07:04:48.967801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.967978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.968004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.968238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.968463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.968487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.968711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.968943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.968971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.969158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.969411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.969463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.969710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.969888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.969912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.970121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.970321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.970346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.970572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.970765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.970792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 
00:27:35.007 [2024-05-15 07:04:48.971014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.971248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.971273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.971475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.971643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.971668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.971872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.972077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.972103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.972309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.972514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.972538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.972709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.972914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.972954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.973181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.973382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.973407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.973643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.973891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.973919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 
00:27:35.007 [2024-05-15 07:04:48.974133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.974344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.974368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.974570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.974890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.974918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.975185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.975557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.975605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.975884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.976082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.976107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.976299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.976599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.976627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.976898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.977111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.977136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.977343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.977549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.977573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 
00:27:35.007 [2024-05-15 07:04:48.977789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.977992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.978017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.978251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.978454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.978479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.978716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.978942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.978972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.007 qpair failed and we were unable to recover it. 00:27:35.007 [2024-05-15 07:04:48.979159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.979404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.007 [2024-05-15 07:04:48.979428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.979727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.979953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.979978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.980241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.980430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.980453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.980667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.980892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.980916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 
00:27:35.008 [2024-05-15 07:04:48.981143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.981346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.981370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.981576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.981760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.981784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.981990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.982226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.982253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.982453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.982664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.982692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.982925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.983136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.983160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.983382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.983629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.983656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 00:27:35.008 [2024-05-15 07:04:48.983883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.984056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.984081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it. 
00:27:35.008 [2024-05-15 07:04:48.984253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.984454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.008 [2024-05-15 07:04:48.984478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.008 qpair failed and we were unable to recover it.
[... the same four-line error pattern repeats about 150 more times between 07:04:48.984 and 07:04:49.056: two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x24809f0 against 10.0.0.2:4420, then "qpair failed and we were unable to recover it." ...]
00:27:35.014 [2024-05-15 07:04:49.055812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.056058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.056082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.014 qpair failed and we were unable to recover it.
00:27:35.014 [2024-05-15 07:04:49.056277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.056507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.056532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.014 qpair failed and we were unable to recover it. 00:27:35.014 [2024-05-15 07:04:49.056707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.057043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.057068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.014 qpair failed and we were unable to recover it. 00:27:35.014 [2024-05-15 07:04:49.057347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.057586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.057611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.014 qpair failed and we were unable to recover it. 00:27:35.014 [2024-05-15 07:04:49.057894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.058117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.058142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.014 qpair failed and we were unable to recover it. 00:27:35.014 [2024-05-15 07:04:49.058326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.058533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.058558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.014 qpair failed and we were unable to recover it. 00:27:35.014 [2024-05-15 07:04:49.058824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.059044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.059069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.014 qpair failed and we were unable to recover it. 00:27:35.014 [2024-05-15 07:04:49.059259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.014 [2024-05-15 07:04:49.059438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.059461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 
00:27:35.015 [2024-05-15 07:04:49.059657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.059873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.059897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.060114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.060346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.060370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.060594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.060832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.060856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.061069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.061275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.061300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.061528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.061718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.061741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.061954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.062153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.062178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.062386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.062586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.062609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 
00:27:35.015 [2024-05-15 07:04:49.062863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.063089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.063114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.063317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.063518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.063544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.063751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.063914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.063944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.064112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.064277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.064302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.064489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.064700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.064725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.064925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.065121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.065145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.065412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.065664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.065688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 
00:27:35.015 [2024-05-15 07:04:49.065891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.066127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.066152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.066359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.066585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.066609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.066818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.067076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.067102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.067308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.067536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.067560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.067761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.068050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.068075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.068278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.068493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.068516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.068757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.068959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.068984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 
00:27:35.015 [2024-05-15 07:04:49.069177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.069353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.069377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.069590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.069799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.069824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.015 qpair failed and we were unable to recover it. 00:27:35.015 [2024-05-15 07:04:49.070019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.015 [2024-05-15 07:04:49.070222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.070246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.070412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.070645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.070669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.070865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.071077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.071103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.071370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.071551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.071576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.071797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.072003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.072029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 
00:27:35.016 [2024-05-15 07:04:49.072198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.072445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.072470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.072690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.072910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.072948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.073285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.073528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.073552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.073843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.074054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.074079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.074258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.074475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.074498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.074689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.075084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.075127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.075422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.075648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.075673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 
00:27:35.016 [2024-05-15 07:04:49.075872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.076076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.076099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.076325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.076523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.076551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.076715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.076941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.076966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.077209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.077414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.077439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.077673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.077840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.077865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.078053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.078249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.078274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.078445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.078743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.078765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 
00:27:35.016 [2024-05-15 07:04:49.079049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.079268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.079292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.079461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.079678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.079702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.079911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.080094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.080118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.016 qpair failed and we were unable to recover it. 00:27:35.016 [2024-05-15 07:04:49.080308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.016 [2024-05-15 07:04:49.080526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.080550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.080764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.080993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.081019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.081277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.081504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.081528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.081725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.082016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.082040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 
00:27:35.017 [2024-05-15 07:04:49.082235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.082544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.082582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.082830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.083035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.083062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.083262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.083463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.083488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.083748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.083974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.083999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.084175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.084394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.084419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.084609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.084806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.084830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.085053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.085275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.085298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 
00:27:35.017 [2024-05-15 07:04:49.085543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.085735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.085759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.085990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.086206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.086230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.086392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.086602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.086626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.086867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.087094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.087119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.087323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.087521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.087545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.087723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.087936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.087960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.088159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.088343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.088367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 
00:27:35.017 [2024-05-15 07:04:49.088577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.088752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.088777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.088952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.089178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.089203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.089399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.089625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.089649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.089913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.090121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.090145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.090349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.090571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.090595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.017 qpair failed and we were unable to recover it. 00:27:35.017 [2024-05-15 07:04:49.090795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.090988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.017 [2024-05-15 07:04:49.091014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.091220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.091442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.091467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 
00:27:35.018 [2024-05-15 07:04:49.091682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.091912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.091942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.092124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.092340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.092363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.092553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.092737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.092761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.092971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.093152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.093175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.093412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.093622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.093645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.093865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.094038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.094078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.094286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.094483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.094507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 
00:27:35.018 [2024-05-15 07:04:49.094707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.094904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.094927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.095152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.095382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.095406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.095680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.095906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.095935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.096116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.096323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.096347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.096551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.096772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.096797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.097000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.097202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.097226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.097434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.097657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.097681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 
00:27:35.018 [2024-05-15 07:04:49.097906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.098140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.098165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.098353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.098581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.098605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.098807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.099038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.099069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.099241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.099468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.099496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.099705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.099903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.099927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.100148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.100338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.100363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 00:27:35.018 [2024-05-15 07:04:49.100529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.100726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.018 [2024-05-15 07:04:49.100750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.018 qpair failed and we were unable to recover it. 
00:27:35.018 [2024-05-15 07:04:49.100965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.018 [2024-05-15 07:04:49.101237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.018 [2024-05-15 07:04:49.101262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.018 qpair failed and we were unable to recover it.
[... the same failure pattern repeats without variation for well over a hundred further connect attempts between 07:04:49.101 and 07:04:49.164 (log timestamps 00:27:35.018 through 00:27:35.024): two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock connection error against tqpair=0x24809f0, addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
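errno 111 on Linux is ECONNREFUSED: each TCP SYN sent to 10.0.0.2:4420 (the IANA-assigned NVMe/TCP port) was answered with a reset because nothing was accepting connections there, which is what this log looks like when the target side of the test never came up or went away before the host finished connecting. A minimal sketch with plain POSIX sockets (not SPDK code; the address and port are copied from the log, and the example assumes the host is reachable but nothing is listening on that port) reproduces the exact errno:

/* one_shot_connect.c - gcc one_shot_connect.c && ./a.out
 * Attempts a single TCP connect to a reachable host with no listener
 * on the port and prints the resulting errno; on Linux this is
 * 111 (ECONNREFUSED), matching the log above.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}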
[... six more identical failures against tqpair=0x24809f0 follow through 07:04:49.164, after which the tqpair value changes and the same error continues: ...]
00:27:35.024 [2024-05-15 07:04:49.164245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.024 [2024-05-15 07:04:49.164462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.024 [2024-05-15 07:04:49.164492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.024 qpair failed and we were unable to recover it.
00:27:35.024 [2024-05-15 07:04:49.164721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.164954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.164984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.165198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.165431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.165458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.165701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.165920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.165966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.166168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.166391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.166421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.166624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.166846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.166872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.167067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.167252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.167280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.167476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.167696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.167720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 
00:27:35.024 [2024-05-15 07:04:49.167921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.168145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.168171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.168401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.168633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.168660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.168907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.169113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.169138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.169355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.169586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.169610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.024 qpair failed and we were unable to recover it. 00:27:35.024 [2024-05-15 07:04:49.169779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.169971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.024 [2024-05-15 07:04:49.170000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.170196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.170409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.170436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.170663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.170895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.170923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-05-15 07:04:49.171143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.171375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.171402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.171653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.171824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.171849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.172021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.172194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.172219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.172413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.172611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.172638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.172825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.173050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.173076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.173300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.173536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.173560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.173821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.174101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.174126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-05-15 07:04:49.174331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.174553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.174580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.174770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.174984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.175009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.175228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.175421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.175448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.175641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.175839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.175867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.176098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.176295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.176319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.176547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.176735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.176763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.176989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.177193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.177236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-05-15 07:04:49.177464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.177683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.177710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.177903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.178122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.178147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.178378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.178566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.178593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.178835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.179071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.179096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.179349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.179664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.179694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.179909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.180095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.180127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.180360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.180604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.180631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 
00:27:35.025 [2024-05-15 07:04:49.180852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.181110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.181136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.181399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.181667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.181712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.025 [2024-05-15 07:04:49.181918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.182124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.025 [2024-05-15 07:04:49.182148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.025 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.182383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.182629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.182656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.182880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.183140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.183169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.183411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.183636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.183663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.183861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.184123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.184151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 
00:27:35.026 [2024-05-15 07:04:49.184344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.184617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.184665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.184882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.185118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.185143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.185320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.185540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.185567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.185826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.186054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.186081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.186311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.186536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.186561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.186824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.187076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.187104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.187344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.187564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.187590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 
00:27:35.026 [2024-05-15 07:04:49.187787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.188019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.188055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.188252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.188468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.188496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.188722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.188911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.188954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.189200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.189402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.189427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.189605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.189837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.189865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.190061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.190266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.190291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.190546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.190759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.190786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 
00:27:35.026 [2024-05-15 07:04:49.190999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.191204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.191233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.191433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.191599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.191622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.191798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.191994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.192023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.192216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.192413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.192445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.192662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.192867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.192891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.193082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.193307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.193336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.193538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.193756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.193783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 
00:27:35.026 [2024-05-15 07:04:49.193981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.194206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.026 [2024-05-15 07:04:49.194234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.026 qpair failed and we were unable to recover it. 00:27:35.026 [2024-05-15 07:04:49.194455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.194654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.194678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.027 qpair failed and we were unable to recover it. 00:27:35.027 [2024-05-15 07:04:49.194852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.195050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.195080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.027 qpair failed and we were unable to recover it. 00:27:35.027 [2024-05-15 07:04:49.195271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.195522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.195549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.027 qpair failed and we were unable to recover it. 00:27:35.027 [2024-05-15 07:04:49.195774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.196003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.196031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.027 qpair failed and we were unable to recover it. 00:27:35.027 [2024-05-15 07:04:49.196228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.027 [2024-05-15 07:04:49.196456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.196483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.196709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.196964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.196996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 
00:27:35.028 [2024-05-15 07:04:49.197185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.197364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.197389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.197563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.197762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.197786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.198014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.198240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.198267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.198462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.198660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.198684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.198859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.199036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.199062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.199287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.199476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.199506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.199727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.199924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.199968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 
00:27:35.028 [2024-05-15 07:04:49.200195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.200398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.200425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.200672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.200918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.200952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.201149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.201365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.201398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.201628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.201843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.201870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.202066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.202249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.202276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.202466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.202656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.202683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.202901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.203127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.203154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 
00:27:35.028 [2024-05-15 07:04:49.203375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.203623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.203650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.203856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.204046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.204074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.028 qpair failed and we were unable to recover it. 00:27:35.028 [2024-05-15 07:04:49.204291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.028 [2024-05-15 07:04:49.204513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.204540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.204763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.204991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.205016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.205187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.205387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.205412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.205620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.205792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.205816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.206050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.206262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.206286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-05-15 07:04:49.206464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.206668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.206693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.206934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.207123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.207151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.207358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.207550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.207579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.207826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.208039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.208067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.208289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.208505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.208533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.208783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.208984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.209011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.209233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.209436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.209463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-05-15 07:04:49.209648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.209841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.209871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.210071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.210293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.210321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.210540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.210763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.210788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.210989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.211186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.211216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.211419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.211611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.211639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.211858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.212088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.212116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.212312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.212536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.212561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-05-15 07:04:49.212763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.212992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.213021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.213220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.213435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.213462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.213709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.213925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.213959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.214152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.214352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.214379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.214603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.214773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.214798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.215025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.215278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.215305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 00:27:35.029 [2024-05-15 07:04:49.215522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.215744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.029 [2024-05-15 07:04:49.215771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:35.029 qpair failed and we were unable to recover it. 
00:27:35.029 [2024-05-15 07:04:49.215969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.029 [2024-05-15 07:04:49.216160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.029 [2024-05-15 07:04:49.216187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.029 qpair failed and we were unable to recover it.
00:27:35.029 [2024-05-15 07:04:49.216374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.216582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.216606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.216825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.217017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.217045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.217238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.217430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.217457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.217653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.217874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.217901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.218134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.218385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.218413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.218605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.218784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.218812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.219038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.219233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.219260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.219472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.219694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.219721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.219945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.220175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.220202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.220423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.220597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.220622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.220848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.221071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.221096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.221331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.221522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.221549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.221772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.221994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.222023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.222251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.222436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.222464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.222683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.222915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.222950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.223156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.223349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.223377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.223567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.223769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.223794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.224040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.224231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.224271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.224481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.224710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.224747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.224956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.225159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.225188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.225409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.225607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.225636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.225829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.226051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.226081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.226282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.226523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.226549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.226774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.226994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.227024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.227245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.227439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.227468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.227649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.227850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.227874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.030 [2024-05-15 07:04:49.228073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.228291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.030 [2024-05-15 07:04:49.228319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.030 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.228563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.228798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.228845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.229058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.229239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.229265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.229483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.229708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.229738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.229980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.230163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.230189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.230391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.230588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.230617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.230838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.231067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.231095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.231288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.231485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.231513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.231734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.231947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.231975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.232177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.232433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.232461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.307 qpair failed and we were unable to recover it.
00:27:35.307 [2024-05-15 07:04:49.232665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.232893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.307 [2024-05-15 07:04:49.232921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.233155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.233367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.233395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.233593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.233841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.233866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.234045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.234256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.234288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.234488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.234690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.234717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.234898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.235107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.235135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.235341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.235566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.235592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.235803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.235988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.236014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.236189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.236407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.236434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.236651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.236851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.236876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.237126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.237342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.237370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.237567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.237770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.237796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.238009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.238228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.238256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.238444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.238663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.238691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.238910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.239135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.239165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.239349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.239600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.239628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.239835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.240038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.240068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.240285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.240471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.240500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.240705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.240906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.240939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.241137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.241392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.241418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.241609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.241802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.241833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.242093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.242300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.242331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.242559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.242782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.242812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.243019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.243249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.243278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.308 qpair failed and we were unable to recover it.
00:27:35.308 [2024-05-15 07:04:49.243503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.243702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.308 [2024-05-15 07:04:49.243730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.243952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.244129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.244154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.244382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.244585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.244614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.244811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.245034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.245062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.245285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.245506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.245532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.245757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.245954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.245980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.246172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.246409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.246433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.246684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.246920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.246953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.247151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.247320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.247347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.247531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.247736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.247760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.248013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.248238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.248265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.248486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.248669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.248696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.248918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.249123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.249147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.249399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.249622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.249646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.249858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.250071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.250101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.250306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.250546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.250573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.250775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.251002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.251037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.251263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.251516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.251542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.251796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.252066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.252096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.252366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.252567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.252592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.252789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.252994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.253019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.253250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.253465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.253492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.253707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.253906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.253944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.254194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.254497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.254524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.309 qpair failed and we were unable to recover it.
00:27:35.309 [2024-05-15 07:04:49.254751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.309 [2024-05-15 07:04:49.254948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.254973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.255198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.255591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.255643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.255887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.256080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.256107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.256306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.256590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.256615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.256869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.257052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.257077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.257292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.257721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.257769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.257983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.258214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.258239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.258621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.259046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.259074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.259299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.259518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.259577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.259827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.260075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.260103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.260356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.260532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.260560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.260812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.261049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.261077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.261266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.261515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.261542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.261777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.261959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.261987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.262216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.262551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.262613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.262897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.263158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.263185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.263411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.263763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.263813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.264036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.264232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.264256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.264476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.264661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.264688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.264939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.265144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.265168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.265367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.265595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.265620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.265888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.266116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.266141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.266359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.266539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.266564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.266808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.267092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.267135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.267376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.267613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.267645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.310 qpair failed and we were unable to recover it.
00:27:35.310 [2024-05-15 07:04:49.267872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.310 [2024-05-15 07:04:49.268092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.268120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.268319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.268543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.268568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.268816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.269034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.269061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.269280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.269477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.269501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.269674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.269900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.269928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.270192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.270404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.270431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.270644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.270891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.270918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.271178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.271358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.271382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.271587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.271807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.271831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.272020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.272242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.272266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.272544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.272823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.272849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.273051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.273271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.273300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.273524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.273853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.273904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.274143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.274376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.274400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.274576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.274781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.274805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.275006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.275253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.275279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.275512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.275881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.275946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.276173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.276415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.276442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.276785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.277053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.277081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.277310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.277605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.277632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.277860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.278087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.278112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.278364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.278761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.278816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.279064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.279265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.279292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.279494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.279691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.279716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.280013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.280323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.280349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.311 [2024-05-15 07:04:49.280542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.280858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.311 [2024-05-15 07:04:49.280906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.311 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.281121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.281345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.281372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.281576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.281776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.281800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.282036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.282221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.282248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.282450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.282773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.282826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.283064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.283293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.283318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.283508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.283819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.283866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.284114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.284302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.284330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.284527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.284744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.284771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.285025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.285443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.285499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.285702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.285954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.285997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.286218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.286407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.286434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.286613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.286845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.286872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.287085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.287300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.287327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.287550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.287798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.287825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.288021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.288403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.288453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.288651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.288894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.288921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.289158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.289340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.289369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.289572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.289775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.312 [2024-05-15 07:04:49.289799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.312 qpair failed and we were unable to recover it.
00:27:35.312 [2024-05-15 07:04:49.290003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.312 [2024-05-15 07:04:49.290401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.312 [2024-05-15 07:04:49.290453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.312 qpair failed and we were unable to recover it. 00:27:35.312 [2024-05-15 07:04:49.290680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.312 [2024-05-15 07:04:49.290957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.312 [2024-05-15 07:04:49.290985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.312 qpair failed and we were unable to recover it. 00:27:35.312 [2024-05-15 07:04:49.291212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.312 [2024-05-15 07:04:49.291513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.312 [2024-05-15 07:04:49.291565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.291786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.292016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.292044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.292300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.292468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.292492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.292686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.292876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.292900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.293126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.293317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.293351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 
00:27:35.313 [2024-05-15 07:04:49.293622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.293850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.293878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.294108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.294496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.294558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.294816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.295039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.295066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.295412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.295735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.295762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.296019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.296198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.296223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.296414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.296605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.296629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.296801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.297029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.297056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 
00:27:35.313 [2024-05-15 07:04:49.297300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.297521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.297547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.297797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.298090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.298118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.298368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.298617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.298643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.298844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.299038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.299068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.299363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.299720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.299747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.299953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.300152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.300177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.300407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.300821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.300869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 
00:27:35.313 [2024-05-15 07:04:49.301129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.301350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.301377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.301563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.301892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.301979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.302209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.302399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.302426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.302674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.302918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.302954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.303186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.303483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.303510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.303733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.303980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.304008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 00:27:35.313 [2024-05-15 07:04:49.304223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.304387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.304412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.313 qpair failed and we were unable to recover it. 
00:27:35.313 [2024-05-15 07:04:49.304593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.313 [2024-05-15 07:04:49.304957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.305016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.305253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.305450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.305477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.305799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.306077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.306104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.306349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.306617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.306643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.306899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.307158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.307186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.307406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.307651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.307677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.307882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.308143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.308171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 
00:27:35.314 [2024-05-15 07:04:49.308403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.308625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.308652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.308872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.309061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.309085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.309281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.309671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.309727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.309958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.310181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.310208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.310435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.310699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.310747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.310971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.311227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.311254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.311471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.311670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.311697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 
00:27:35.314 [2024-05-15 07:04:49.311919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.312175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.312203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.312433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.312668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.312695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.312934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.313130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.313159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.313382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.313696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.313723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.313919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.314124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.314151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.314370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.314657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.314719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.314943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.315158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.315185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 
00:27:35.314 [2024-05-15 07:04:49.315404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.315761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.315813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.316062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.316311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.316338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.316528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.316715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.316738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.317003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.317226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.317253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.317476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.317772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.317799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.318046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.318220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.314 [2024-05-15 07:04:49.318244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.314 qpair failed and we were unable to recover it. 00:27:35.314 [2024-05-15 07:04:49.318452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.318623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.318647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 
00:27:35.315 [2024-05-15 07:04:49.318900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.319125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.319153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.319347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.319567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.319598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.319784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.319974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.320002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.320214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.320438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.320464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.320713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.320910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.320943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.321168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.321393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.321417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.321808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.322075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.322103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 
00:27:35.315 [2024-05-15 07:04:49.322329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.322621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.322685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.322915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.323139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.323166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.323355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.323574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.323601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.323782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.324022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.324048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.324247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.324467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.324498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.324719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.324982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.325010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.325202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.325443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.325469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 
00:27:35.315 [2024-05-15 07:04:49.325732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.325945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.325969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.326178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.326420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.326443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.326651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.326865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.326892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.327120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.327310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.327334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.327670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.327904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.327950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.328133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.328357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.328384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.328605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.328822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.328849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 
00:27:35.315 [2024-05-15 07:04:49.329049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.329277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.329301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.329760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.330094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.330121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.330323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.330519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.330543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.315 qpair failed and we were unable to recover it. 00:27:35.315 [2024-05-15 07:04:49.330779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.331044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.315 [2024-05-15 07:04:49.331072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.331258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.331590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.331656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.331896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.332120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.332148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.332361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.332764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.332821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 
00:27:35.316 [2024-05-15 07:04:49.333046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.333269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.333293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.333530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.333719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.333747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.333950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.334235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.334299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.334594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.334857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.334881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.335086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.335446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.335495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.335740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.335936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.335963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.336212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.336426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.336454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 
00:27:35.316 [2024-05-15 07:04:49.336682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.336861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.336885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.337107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.337299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.337327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.337576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.337764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.337791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.338026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.338272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.338298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.338581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.338895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.338944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.339191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.339478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.339505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 00:27:35.316 [2024-05-15 07:04:49.339731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.339978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.316 [2024-05-15 07:04:49.340006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.316 qpair failed and we were unable to recover it. 
00:27:35.316 [2024-05-15 07:04:49.340230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.316 [2024-05-15 07:04:49.340413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.316 [2024-05-15 07:04:49.340440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.316 qpair failed and we were unable to recover it.
[... the same three-entry sequence (two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x24809f0 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 07:04:49.340 through 07:04:49.418 (log time 00:27:35.316-00:27:35.322) ...]
00:27:35.322 [2024-05-15 07:04:49.418860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.419090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.419115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.322 qpair failed and we were unable to recover it. 00:27:35.322 [2024-05-15 07:04:49.419327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.419523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.419551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.322 qpair failed and we were unable to recover it. 00:27:35.322 [2024-05-15 07:04:49.419777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.420035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.420063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.322 qpair failed and we were unable to recover it. 00:27:35.322 [2024-05-15 07:04:49.420308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.420510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.322 [2024-05-15 07:04:49.420535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.322 qpair failed and we were unable to recover it. 00:27:35.322 [2024-05-15 07:04:49.420770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.420995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.421023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.421214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.421447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.421472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.421757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.422081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.422109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 
00:27:35.323 [2024-05-15 07:04:49.422338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.422536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.422563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.422786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.423012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.423040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.423266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.423460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.423488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.423862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.424106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.424134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.424320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.424542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.424570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.424776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.424999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.425025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.425227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.425445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.425472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 
00:27:35.323 [2024-05-15 07:04:49.425683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.425878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.425905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.426155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.426332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.426358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.426583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.426790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.426815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.427013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.427217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.427247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.427552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.427807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.427834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.428037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.428309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.428358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.428582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.428808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.428835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 
00:27:35.323 [2024-05-15 07:04:49.429065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.429321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.323 [2024-05-15 07:04:49.429375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.323 qpair failed and we were unable to recover it. 00:27:35.323 [2024-05-15 07:04:49.429590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.429790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.429815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.429979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.430156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.430181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.430437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.430820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.430879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.431114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.431364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.431391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.431645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.431875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.431902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.432150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.432337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.432362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 
00:27:35.324 [2024-05-15 07:04:49.432534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.432711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.432735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.432944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.433132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.433164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.433372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.433546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.433571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.433767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.433990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.434019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.434206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.434413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.434441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.434656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.434843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.434870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.435097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.435293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.435318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 
00:27:35.324 [2024-05-15 07:04:49.435519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.435695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.435723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.435925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.436101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.436125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.436383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.436744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.436805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.437071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.437338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.437389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.437643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.437824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.437850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.438091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.438324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.438349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.438583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.438836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.438864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 
00:27:35.324 [2024-05-15 07:04:49.439072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.439297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.439325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.439517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.439752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.439802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.440003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.440210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.440235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.440419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.440618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.440645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.440869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.441085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.441113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.324 qpair failed and we were unable to recover it. 00:27:35.324 [2024-05-15 07:04:49.441339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.324 [2024-05-15 07:04:49.441595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.441648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.441854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.442062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.442087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 
00:27:35.325 [2024-05-15 07:04:49.442286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.442574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.442620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.442847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.443041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.443069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.443292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.443514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.443541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.443754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.443981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.444006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.444209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.444388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.444413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.444648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.444849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.444873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.445051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.445255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.445280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 
00:27:35.325 [2024-05-15 07:04:49.445509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.445792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.445842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.446056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.446272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.446299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.446486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.446787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.446842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.447037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.447260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.447287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.447512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.447868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.447923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.448153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.448353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.448378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.448610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.448806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.448834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 
00:27:35.325 [2024-05-15 07:04:49.449035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.449336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.449385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.449624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.449875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.449902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.450113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.450309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.450359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.450582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.450812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.450862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.451096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.451322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.451349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.451560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.451818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.451842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.452050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.452309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.452337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 
00:27:35.325 [2024-05-15 07:04:49.452575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.452754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.452778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.452974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.453194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.453221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.325 qpair failed and we were unable to recover it. 00:27:35.325 [2024-05-15 07:04:49.453479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.325 [2024-05-15 07:04:49.453813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.453863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.454086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.454279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.454307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.454609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.454830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.454857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.455070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.455237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.455261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.455464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.455722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.455749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 
00:27:35.326 [2024-05-15 07:04:49.455979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.456180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.456204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.456396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.456620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.456647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.456865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.457066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.457092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.457319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.457554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.457584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.457806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.458037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.458067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.458292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.458486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.458510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.458743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.458956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.458984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 
00:27:35.326 [2024-05-15 07:04:49.459219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.459494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.459542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.459741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.459991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.460017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.460248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.460659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.460710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.460959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.461162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.461186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.461440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.461684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.461709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.461898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.462102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.462129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.462533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.462807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.462831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 
00:27:35.326 [2024-05-15 07:04:49.463061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.463253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.463282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.463505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.463786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.463815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.464030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.464250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.464277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.464506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.464734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.464760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.464984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.465167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.465191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.465416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.465676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.465702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 00:27:35.326 [2024-05-15 07:04:49.465927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.466181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.326 [2024-05-15 07:04:49.466208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.326 qpair failed and we were unable to recover it. 
00:27:35.327 [2024-05-15 07:04:49.466397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.327 [2024-05-15 07:04:49.466611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.327 [2024-05-15 07:04:49.466638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.327 qpair failed and we were unable to recover it.
[... the same three-line failure pattern -- two posix.c:1032:posix_sock_create "connect() failed, errno = 111" entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420", each followed by "qpair failed and we were unable to recover it." -- repeats continuously from 07:04:49.466 through 07:04:49.542 (console time 00:27:35.327 to 00:27:35.602); duplicate entries elided ...]
00:27:35.602 [2024-05-15 07:04:49.542337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.602 [2024-05-15 07:04:49.542567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.602 [2024-05-15 07:04:49.542594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.602 qpair failed and we were unable to recover it.
00:27:35.602 [2024-05-15 07:04:49.542808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.543002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.543030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.602 qpair failed and we were unable to recover it. 00:27:35.602 [2024-05-15 07:04:49.543225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.543416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.543443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.602 qpair failed and we were unable to recover it. 00:27:35.602 [2024-05-15 07:04:49.543658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.543868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.543895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.602 qpair failed and we were unable to recover it. 00:27:35.602 [2024-05-15 07:04:49.544116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.544463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.544513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.602 qpair failed and we were unable to recover it. 00:27:35.602 [2024-05-15 07:04:49.544765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.544978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.602 [2024-05-15 07:04:49.545006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.545190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.545568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.545617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.545870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.546039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.546064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 
00:27:35.603 [2024-05-15 07:04:49.546278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.546504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.546531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.546722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.546955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.546983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.547202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.547456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.547480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.547674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.547926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.547960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.548150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.548461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.548527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.548757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.549005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.549034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.549241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.549440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.549467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 
00:27:35.603 [2024-05-15 07:04:49.549697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.549881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.549908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.550126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.550320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.550347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.550603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.550829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.550853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.551118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.551340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.551397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.551732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.552013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.552039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.552244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.552500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.552527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.552720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.552944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.552974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 
00:27:35.603 [2024-05-15 07:04:49.553199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.553388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.553415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.553611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.553835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.553862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.554082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.554332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.554359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.554587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.554775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.554802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.555014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.555233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.555260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.555527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.555914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.555992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.556192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.556559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.556607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 
00:27:35.603 [2024-05-15 07:04:49.556855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.557055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.557080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.557308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.557543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.557568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.557857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.558077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.558105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.558304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.558535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.558559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.558804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.559035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.559063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.559306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.559501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.559527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.603 qpair failed and we were unable to recover it. 00:27:35.603 [2024-05-15 07:04:49.559723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.603 [2024-05-15 07:04:49.559975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.560000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 
00:27:35.604 [2024-05-15 07:04:49.560210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.560383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.560407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.560596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.560843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.560869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.561083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.561271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.561298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.561517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.561902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.561974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.562208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.562406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.562431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.562681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.562896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.562923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.563189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.563436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.563463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 
00:27:35.604 [2024-05-15 07:04:49.563688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.563909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.563947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.564178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.564534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.564581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.564805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.565002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.565030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.565278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.565503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.565528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.565747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.565973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.566000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.566227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.566445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.566470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.566705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.566908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.566940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 
00:27:35.604 [2024-05-15 07:04:49.567174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.567354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.567377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.567642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.567866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.567894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.568107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.568301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.568328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.568554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.568797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.568824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.569052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.569252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.569277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.569505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.569826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.569879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.570103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.570509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.570556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 
00:27:35.604 [2024-05-15 07:04:49.570778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.570998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.571027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.571276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.571468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.571495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.571718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.571902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.571935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.572156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.572402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.572429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.572651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.572869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.572896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.573123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.573431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.573490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.573824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.574071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.574099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 
00:27:35.604 [2024-05-15 07:04:49.574299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.574574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.604 [2024-05-15 07:04:49.574625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.604 qpair failed and we were unable to recover it. 00:27:35.604 [2024-05-15 07:04:49.574881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.575118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.575146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.575370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.575620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.575647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.575838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.576060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.576088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.576312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.576559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.576586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.576836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.577086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.577111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.577328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.577526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.577550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 
00:27:35.605 [2024-05-15 07:04:49.577890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.578140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.578164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.578399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.578810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.578860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.579087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.579312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.579340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.579545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.579945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.579999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.580219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.580592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.580646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.580846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.581086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.581114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.581318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.581534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.581560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 
00:27:35.605 [2024-05-15 07:04:49.581756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.581969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.581997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.582249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.582494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.582519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.582710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.582899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.582926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.583157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.583357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.583381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.583634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.584018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.584046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.584274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.584498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.584525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.584714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.584970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.584998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 
00:27:35.605 [2024-05-15 07:04:49.585260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.585482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.585509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.585765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.586019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.586047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.586272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.586464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.586488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.586680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.586895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.586922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.587156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.587494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.587548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.587779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.588003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.588029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.588290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.588538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.588562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 
00:27:35.605 [2024-05-15 07:04:49.588761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.588933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.588958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.589153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.589474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.589527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.605 [2024-05-15 07:04:49.589742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.589953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.605 [2024-05-15 07:04:49.589987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.605 qpair failed and we were unable to recover it. 00:27:35.606 [2024-05-15 07:04:49.590238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.590405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.590430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-05-15 07:04:49.590657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.590873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.590900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-05-15 07:04:49.591143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.591431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.591458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.606 qpair failed and we were unable to recover it. 00:27:35.606 [2024-05-15 07:04:49.591714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.591945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.606 [2024-05-15 07:04:49.591970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.606 qpair failed and we were unable to recover it. 
00:27:35.606 [2024-05-15 07:04:49.592235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.606 [2024-05-15 07:04:49.592436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.606 [2024-05-15 07:04:49.592461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.606 qpair failed and we were unable to recover it.
00:27:35.606 [... the same failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x24809f0 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 07:04:49.592 through 07:04:49.669; only the timestamps differ ...]
00:27:35.611 [2024-05-15 07:04:49.669142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.611 [2024-05-15 07:04:49.669360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.611 [2024-05-15 07:04:49.669388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.611 qpair failed and we were unable to recover it.
00:27:35.611 [2024-05-15 07:04:49.669587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.669800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.669828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.670045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.670304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.670359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.670610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.670830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.670859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.671074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.671275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.671303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.671536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.671713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.671744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.671920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.672127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.672156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.672487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.672812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.672837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 
00:27:35.611 [2024-05-15 07:04:49.673039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.673255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.673283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.673505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.673758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.673786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.611 [2024-05-15 07:04:49.673983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.674171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.611 [2024-05-15 07:04:49.674199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.611 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.674469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.674705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.674733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.674955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.675158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.675186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.675390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.675669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.675694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.675875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.676100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.676131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 
00:27:35.612 [2024-05-15 07:04:49.676353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.676599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.676631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.676862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.677068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.677096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.677322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.677546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.677597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.677792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.677980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.678008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.678209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.678393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.678422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.678660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.678886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.678913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.679124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.679353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.679379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 
00:27:35.612 [2024-05-15 07:04:49.679580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.679891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.679959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.680184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.680425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.680450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.680673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.680895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.680925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.681127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.681332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.681357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.681561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.681904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.681971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.682204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.682402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.682426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.682601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.682826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.682853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 
00:27:35.612 [2024-05-15 07:04:49.683076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.683291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.683321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.683543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.683736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.683763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.684064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.684259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.684286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.684511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.684715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.684741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.684916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.685108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.685135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.685343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.685540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.685565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.685818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.686020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.686046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 
00:27:35.612 [2024-05-15 07:04:49.686259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.686444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.686472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.686679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.686883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.686911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.687154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.687427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.687479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.687734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.687951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.687980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.612 qpair failed and we were unable to recover it. 00:27:35.612 [2024-05-15 07:04:49.688207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.688399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.612 [2024-05-15 07:04:49.688427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.688678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.688845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.688870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.689077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.689324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.689349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 
00:27:35.613 [2024-05-15 07:04:49.689613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.689854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.689882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.690103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.690344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.690403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.690598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.690816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.690844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.691066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.691236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.691262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.691468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.691752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.691778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.692008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.692260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.692316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.692543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.692763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.692788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 
00:27:35.613 [2024-05-15 07:04:49.692969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.693147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.693171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.693370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.693642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.693693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.693895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.694124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.694154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.694384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.694707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.694758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.695003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.695253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.695278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.695648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.695923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.695963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.696180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.696532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.696585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 
00:27:35.613 [2024-05-15 07:04:49.696813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.697023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.697048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.697274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.697495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.697523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.697752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.697980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.698007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.698203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.698386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.698413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.698633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.698846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.698873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.699132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.699412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.699463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.699693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.699884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.699913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 
00:27:35.613 [2024-05-15 07:04:49.700118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.700361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.700418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.700642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.700833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.700861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.701075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.701279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.701310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.701512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.701861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.701914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.702120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.702348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.702377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.702580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.702770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.702798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.613 qpair failed and we were unable to recover it. 00:27:35.613 [2024-05-15 07:04:49.702992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.613 [2024-05-15 07:04:49.703179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.703208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 
00:27:35.614 [2024-05-15 07:04:49.703518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.703862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.703918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.704154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.704355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.704379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.704578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.704804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.704829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.705027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.705336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.705389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.705619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.705814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.705841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.706070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.706361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.706412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.706611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.706822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.706849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 
00:27:35.614 [2024-05-15 07:04:49.707072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.707269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.707299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.707481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.707683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.707710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.707940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.708122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.708146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.708322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.708546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.708573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.708801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.709027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.709055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.709281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.709500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.709525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.709733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.709953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.709977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 
00:27:35.614 [2024-05-15 07:04:49.710149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.710338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.710363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.710558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.710788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.710838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.711072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.711273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.711297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.711493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.711664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.711690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.711918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.712158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.712186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.712402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.712782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.712832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.713083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.713312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.713337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 
00:27:35.614 [2024-05-15 07:04:49.713536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.713706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.713730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.713967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.714207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.714231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.714451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.714685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.714712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.714941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.715161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.715186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.715389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.715562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.715588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.715823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.716048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.716076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.716330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.716536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.716560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 
00:27:35.614 [2024-05-15 07:04:49.716848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.717108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.717134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.614 qpair failed and we were unable to recover it. 00:27:35.614 [2024-05-15 07:04:49.717312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.614 [2024-05-15 07:04:49.717535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.717563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.615 qpair failed and we were unable to recover it. 00:27:35.615 [2024-05-15 07:04:49.717786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.718007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.718036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.615 qpair failed and we were unable to recover it. 00:27:35.615 [2024-05-15 07:04:49.718288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.718639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.718699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.615 qpair failed and we were unable to recover it. 00:27:35.615 [2024-05-15 07:04:49.718917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.719133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.719160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.615 qpair failed and we were unable to recover it. 00:27:35.615 [2024-05-15 07:04:49.719365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.719619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.719646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.615 qpair failed and we were unable to recover it. 00:27:35.615 [2024-05-15 07:04:49.719898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.720097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.615 [2024-05-15 07:04:49.720125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.615 qpair failed and we were unable to recover it. 
[... 140 further identical failure cycles omitted, timestamps 2024-05-15 07:04:49.720333 through 07:04:49.791569: each cycle is two posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 entries, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:27:35.620 [2024-05-15 07:04:49.791764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.791952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.791980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.792190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.792475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.792498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.792695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.792861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.792885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.793105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.793522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.793570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.793800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.794016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.794045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.794277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.794498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.794528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.794759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.794951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.794977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 
00:27:35.620 [2024-05-15 07:04:49.795176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.795352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.795376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.795603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.795787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.795814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.796038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.796224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.796251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.796442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.796627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.796650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.796941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.797168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.797192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.797577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.797995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.798023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.798277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.798444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.798468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 
00:27:35.620 [2024-05-15 07:04:49.798680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.798916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.798950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.799141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.799355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.799382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.799589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.799865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.799916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.800126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.800512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.800564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.800752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.801011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.801041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.801242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.801455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.801482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.801732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.801983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.802008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 
00:27:35.620 [2024-05-15 07:04:49.802226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.802409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.802436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.802636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.802835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.802859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.620 qpair failed and we were unable to recover it. 00:27:35.620 [2024-05-15 07:04:49.803072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.620 [2024-05-15 07:04:49.803299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.803326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.803574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.803808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.803833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.804123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.804341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.804369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.804575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.804805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.804832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.805061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.805248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.805275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 
00:27:35.621 [2024-05-15 07:04:49.805526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.805904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.805963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.806181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.806375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.806404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.806599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.806766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.806790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.806972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.807205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.807232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.807458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.807686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.807712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.807988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.808215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.808254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.808457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.808807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.808876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 
00:27:35.621 [2024-05-15 07:04:49.809074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.809290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.809317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.809517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.809736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.809762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.809955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.810169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.810196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.810413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.810622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.810646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.810864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.811091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.811117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.811346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.811589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.811616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.811857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.812111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.812139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 
00:27:35.621 [2024-05-15 07:04:49.812390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.812624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.812665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.812891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.813135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.813163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.813419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.813730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.813784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.813987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.814402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.814461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.814733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.814965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.814993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.815226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.815534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.815585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.815809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.816005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.816033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 
00:27:35.621 [2024-05-15 07:04:49.816424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.816830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.816878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.817133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.817370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.817412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.817630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.817854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.817877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.818108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.818291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.621 [2024-05-15 07:04:49.818318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.621 qpair failed and we were unable to recover it. 00:27:35.621 [2024-05-15 07:04:49.818573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.818958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.819009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.819218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.819539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.819588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.819797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.820028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.820056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 
00:27:35.622 [2024-05-15 07:04:49.820273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.820600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.820666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.820912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.821141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.821169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.821375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.821554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.821579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.821803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.822008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.822037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.822218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.822470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.822495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.822697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.822895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.822924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.823182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.823362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.823389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 
00:27:35.622 [2024-05-15 07:04:49.823593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.823792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.823821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.824069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.824289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.824313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.824518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.824799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.622 [2024-05-15 07:04:49.824851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.622 qpair failed and we were unable to recover it. 00:27:35.622 [2024-05-15 07:04:49.825062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.825249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.825273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.825471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.825700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.825728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.825965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.826189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.826217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.826448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.826656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.826681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 
00:27:35.913 [2024-05-15 07:04:49.826893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.827104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.827134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.827389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.827761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.827813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.828071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.828261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.828285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.828575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.828969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.828997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.829221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.829613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.829663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.829900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.830135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.830164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.830415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.830733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.830760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 
00:27:35.913 [2024-05-15 07:04:49.830960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.831219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.831246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.831501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.831699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.831725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.831961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.832187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.832215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.832464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.832675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.832703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.832950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.833155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.833184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.833405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.833686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.833711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.833893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.834118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.834146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 
00:27:35.913 [2024-05-15 07:04:49.834377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.834666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.834691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.834865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.835118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.835150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.835379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.835626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.835654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.835869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.836121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.836148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.836364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.836570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.836598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.836897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.837145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.837174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.837392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.837594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.837618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 
00:27:35.913 [2024-05-15 07:04:49.837805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.838014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.838042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.838290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.838537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.838561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.838847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.839100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.839130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.839341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.839583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.839631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.839863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.840051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.840079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.840310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.840504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.840533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 00:27:35.913 [2024-05-15 07:04:49.840957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.841201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.841229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.913 qpair failed and we were unable to recover it. 
00:27:35.913 [2024-05-15 07:04:49.841425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.841647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.913 [2024-05-15 07:04:49.841674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it. 00:27:35.914 [2024-05-15 07:04:49.841891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.842101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.842126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it. 00:27:35.914 [2024-05-15 07:04:49.842361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.842751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.842804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it. 00:27:35.914 [2024-05-15 07:04:49.843042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.843243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.843268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it. 00:27:35.914 [2024-05-15 07:04:49.843503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.843733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.843760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it. 00:27:35.914 [2024-05-15 07:04:49.844003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.844238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.844266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it. 00:27:35.914 [2024-05-15 07:04:49.844491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.844779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.844806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it. 
00:27:35.914 [2024-05-15 07:04:49.845008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.845239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.914 [2024-05-15 07:04:49.845264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.914 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, then one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420" followed by "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, from 07:04:49.845 through 07:04:49.914, elapsed-time marker 00:27:35.914 to 00:27:35.917 ...]
00:27:35.917 [2024-05-15 07:04:49.914796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.917 [2024-05-15 07:04:49.915057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.917 [2024-05-15 07:04:49.915082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.917 qpair failed and we were unable to recover it. 00:27:35.917 [2024-05-15 07:04:49.915264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.915479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.915504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.918 qpair failed and we were unable to recover it. 00:27:35.918 [2024-05-15 07:04:49.915681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.915879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.915906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.918 qpair failed and we were unable to recover it. 00:27:35.918 [2024-05-15 07:04:49.916073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248e4b0 is same with the state(5) to be set 00:27:35.918 [2024-05-15 07:04:49.916298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.916496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.916529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:35.918 qpair failed and we were unable to recover it. 00:27:35.918 [2024-05-15 07:04:49.916757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.917032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.917063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:35.918 qpair failed and we were unable to recover it. 00:27:35.918 [2024-05-15 07:04:49.917257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.917469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.917499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:35.918 qpair failed and we were unable to recover it. 00:27:35.918 [2024-05-15 07:04:49.917741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.917943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.918 [2024-05-15 07:04:49.917981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:35.918 qpair failed and we were unable to recover it. 
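errno = 111 is ECONNREFUSED on Linux: the TCP connect to 10.0.0.2 port 4420 (the NVMe/TCP well-known port) is being actively refused, which is what the initiator sees while nothing is accepting on the target side. A minimal standalone sketch (not SPDK code; the address and port are copied from the log purely for illustration) that reproduces the same connect() failure path posix_sock_create reports:

/* Minimal sketch, not SPDK code: shows how a refused TCP connect()
 * surfaces as errno 111 (ECONNREFUSED), the error repeated above.
 * 10.0.0.2:4420 mirrors the log's target and is an assumption here. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener on 4420, errno is 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a host with nothing bound on 4420, this prints "connect() failed, errno = 111 (Connection refused)", matching the log.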
00:27:35.918 [the same failure sequence for tqpair=0x7fd188000b90 repeats through 2024-05-15 07:04:49.977118]
00:27:35.920 [2024-05-15 07:04:49.977401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.920 [2024-05-15 07:04:49.977648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.920 [2024-05-15 07:04:49.977678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.920 qpair failed and we were unable to recover it.
00:27:35.921 [the tqpair=0x24809f0 failure sequence continues through 2024-05-15 07:04:49.980062]
00:27:35.921 [2024-05-15 07:04:49.980265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.980518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.980543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.980768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.981073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.981097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.981300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.981500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.981525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.981743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.982008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.982032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.982303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.982549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.982573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.982772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.982954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.982993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.983204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.983448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.983473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 
00:27:35.921 [2024-05-15 07:04:49.983713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.983913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.983947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.984139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.984343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.984367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.984697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.984922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.984952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.985142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.985346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.985369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.985566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.985777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.985804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.986035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.986244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.986267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.986441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.986622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.986645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 
00:27:35.921 [2024-05-15 07:04:49.986857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.987080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.987109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.987345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.987547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.987572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.987860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.988067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.988092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.988266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.988464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.988490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.988808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.989027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.989053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.989250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.989460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.989484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.989710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.989934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.989960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 
00:27:35.921 [2024-05-15 07:04:49.990128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.990325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.990349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.990552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.990819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.990842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.991038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.991265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.991289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.991489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.991708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.991733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.991946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.992196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.992219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.992418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.992603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.992626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.992850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.993036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.993061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 
00:27:35.921 [2024-05-15 07:04:49.993235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.993420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.993444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.993678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.993846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.993870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.994037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.994237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.994261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.994526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.994795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.994821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.995066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.995315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.995342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.995550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.995791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.995817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.996040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.996242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.996267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 
00:27:35.921 [2024-05-15 07:04:49.996471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.996696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.996720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.996908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.997127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.997152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.997353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.997593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.997617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.997836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.998005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.998031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.921 qpair failed and we were unable to recover it. 00:27:35.921 [2024-05-15 07:04:49.998263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.998474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.921 [2024-05-15 07:04:49.998497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:49.998735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:49.998959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:49.998984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:49.999162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:49.999372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:49.999397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 
00:27:35.922 [2024-05-15 07:04:49.999559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:49.999778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:49.999802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.000023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.000219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.000244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.000448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.000838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.000877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.001082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.001281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.001308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.001499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.001697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.001722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.001924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.002106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.002131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.002340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.002519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.002543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 
00:27:35.922 [2024-05-15 07:04:50.002714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.002911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.002943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.003152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.003345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.003369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.003593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.003794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.003818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.004032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.004211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.004236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.004438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.004667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.004691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.004949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.005165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.005189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.005367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.005568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.005593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 
00:27:35.922 [2024-05-15 07:04:50.005793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.006020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.006045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.006274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.006467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.006491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.006682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.006861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.006887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.007099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.007276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.007301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.007492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.007745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.007770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.008045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.008209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.008233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.008438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.008614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.008639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 
00:27:35.922 [2024-05-15 07:04:50.008823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.009046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.009071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.009253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.009479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.009504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.009696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.009894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.009923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.010135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.010308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.010333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.010532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.010709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.010733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.010938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.011163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.011188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.011364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.011554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.011578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 
00:27:35.922 [2024-05-15 07:04:50.011781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.012000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.012025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.012254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.012457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.012483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.012659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.012847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.012872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.013041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.013239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.013265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.013446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.013677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.013702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.922 qpair failed and we were unable to recover it. 00:27:35.922 [2024-05-15 07:04:50.013941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.014139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.922 [2024-05-15 07:04:50.014164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.014371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.014566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.014591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 
00:27:35.923 [2024-05-15 07:04:50.014791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.015016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.015041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.015266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.015457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.015482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.015654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.015882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.015906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.016123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.016321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.016345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.016547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.016750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.016780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.016984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.017212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.017247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.017480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.017704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.017746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 
00:27:35.923 [2024-05-15 07:04:50.017973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.018180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.018207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.018408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.018628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.018656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.018866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.019092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.019122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.019365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.019551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.019579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.019807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.020033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.020063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.020290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.020509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.020536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.020780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.020960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.020985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 
00:27:35.923 [2024-05-15 07:04:50.021214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.021415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.021442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.021666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.021885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.021913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.022120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.022312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.022336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.022566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.022758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.022785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.022973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.023193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.023220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.023448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.023671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.023699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.023926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.024132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.024159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 
00:27:35.923 [2024-05-15 07:04:50.024406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.024616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.024643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.024836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.025037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.025067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.025282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.025608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.025655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.025884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.026158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.026187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.026412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.026642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.026669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.026918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.027146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.027173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.027417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.027593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.027617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 
00:27:35.923 [2024-05-15 07:04:50.027841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.028078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.028103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.028328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.028677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.028725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.028973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.029165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.029193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.029417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.029613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.029638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.029839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.030103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.030128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.030350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.030665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.030714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.030946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.031194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.031222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 
00:27:35.923 [2024-05-15 07:04:50.031449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.031793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.031844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.032060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.032293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.032320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.032575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.032969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.033017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.033247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.033433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.033462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.033712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.033945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.033978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.034208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.034488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.034540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 00:27:35.923 [2024-05-15 07:04:50.034764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.034982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.923 [2024-05-15 07:04:50.035010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.923 qpair failed and we were unable to recover it. 
00:27:35.923 [2024-05-15 07:04:50.035223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.035434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.035462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.035682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.035908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.035940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.036192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.036462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.036487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.036710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.036946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.036974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.037197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.037420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.037449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.037674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.037872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.037899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.038155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.038380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.038407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 
00:27:35.924 [2024-05-15 07:04:50.038629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.038852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.038884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.039119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.039322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.039347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.039516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.039720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.039744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.039966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.040162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.040189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.040389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.040588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.040612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.040836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.041062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.041087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.041288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.041454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.041481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 
00:27:35.924 [2024-05-15 07:04:50.041653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.041833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.041861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.042091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.042325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.042376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.042570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.042769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.042793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.043015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.043233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.043258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.043438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.043634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.043662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.043886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.044076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.044104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.044317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.044566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.044591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 
00:27:35.924 [2024-05-15 07:04:50.044827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.045052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.045081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.045308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.045513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.045537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.045728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.045955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.045983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.046198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.046380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.046409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.046600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.046800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.046829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.047081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.047338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.047365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.047562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.047791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.047815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 
00:27:35.924 [2024-05-15 07:04:50.048022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.048224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.048253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.048475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.048824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.048881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.049089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.049286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.049314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.049511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.049733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.049760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.050014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.050261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.050290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.050487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.050701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.050728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.050964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.051190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.051217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 
00:27:35.924 [2024-05-15 07:04:50.051411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.051653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.051680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.051921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.052146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.052173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.052394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.052648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.052675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.052893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.053100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.053127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.053325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.053745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.053794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.054019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.054350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.054405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 00:27:35.924 [2024-05-15 07:04:50.054641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.054833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.054860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.924 qpair failed and we were unable to recover it. 
00:27:35.924 [2024-05-15 07:04:50.055091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.924 [2024-05-15 07:04:50.055413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.055456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.055690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.055888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.055912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.056118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.056419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.056485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.056708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.056900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.056927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.057151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.057342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.057371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.057566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.057794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.057822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.058048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.058274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.058301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 
00:27:35.925 [2024-05-15 07:04:50.058499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.058728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.058755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.058972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.059153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.059182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.059382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.059752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.059802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.060001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.060197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.060221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.060450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.060675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.060699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.060921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.061148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.061176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.061396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.061569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.061593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 
00:27:35.925 [2024-05-15 07:04:50.061825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.062039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.062067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.062293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.062469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.062493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.062696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.062872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.062902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.063107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.063324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.063351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.063548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.063843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.063896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.064146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.064347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.064372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.064539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.064920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.064985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 
00:27:35.925 [2024-05-15 07:04:50.065205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.065516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.065578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.065793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.066075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.066103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.066349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.066602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.066626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.066844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.067064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.067092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.067317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.067628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.067695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.067917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.068123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.068147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.068400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.068621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.068645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 
00:27:35.925 [2024-05-15 07:04:50.068814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.069007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.069032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.069230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.069579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.069637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.069835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.070090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.070115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.070332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.070577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.070604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.070829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.071020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.071045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.071262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.071480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.071505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.071751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.071951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.071976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 
00:27:35.925 [2024-05-15 07:04:50.072230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.072425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.072452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.072677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.072925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.925 [2024-05-15 07:04:50.072959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.925 qpair failed and we were unable to recover it. 00:27:35.925 [2024-05-15 07:04:50.073190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.073618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.073670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.073894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.074104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.074132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.074376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.074673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.074724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.074917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.075148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.075174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.075397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.075646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.075670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 
00:27:35.926 [2024-05-15 07:04:50.075917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.076150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.076177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.076414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.076612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.076639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.076893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.077098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.077126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.077333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.077642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.077701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.077952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.078180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.078207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.078433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.078683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.078722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.078975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.079200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.079227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 
00:27:35.926 [2024-05-15 07:04:50.079426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.079731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.079795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.080022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.080263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.080290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.080482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.080738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.080762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.081021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.081350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.081406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.081630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.081850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.081877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.082104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.082315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.082341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.082570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.082799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.082826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 
00:27:35.926 [2024-05-15 07:04:50.083062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.083304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.083328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.083502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.083726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.083750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.083991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.084213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.084243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.084465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.084653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.084682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.084913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.085095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.085120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.085303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.085511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.085575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.085806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.086016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.086044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 
00:27:35.926 [2024-05-15 07:04:50.086275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.086527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.086577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.086979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.087184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.087209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.087460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.087662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.087689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.087912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.088137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.088167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.088393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.088685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.088738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.089007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.089189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.089213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.089456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.089648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.089675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 
00:27:35.926 [2024-05-15 07:04:50.089874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.090116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.090141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.090342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.090748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.090800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.091049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.091224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.091248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.091478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.091707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.091734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.091954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.092152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.092175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.092405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.092747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.092797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 00:27:35.926 [2024-05-15 07:04:50.093016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.093218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.926 [2024-05-15 07:04:50.093245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:35.926 qpair failed and we were unable to recover it. 
00:27:35.926 [2024-05-15 07:04:50.093461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.926 [2024-05-15 07:04:50.093689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.926 [2024-05-15 07:04:50.093716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.926 qpair failed and we were unable to recover it.
00:27:35.926 [2024-05-15 07:04:50.093954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.926 [2024-05-15 07:04:50.094173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.926 [2024-05-15 07:04:50.094201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.926 qpair failed and we were unable to recover it.
00:27:35.926 [2024-05-15 07:04:50.094431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.926 [2024-05-15 07:04:50.094756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.926 [2024-05-15 07:04:50.094813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.095045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.095215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.095255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.095488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.095704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.095732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.095978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.096204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.096228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.096454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.096668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.096696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.096918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.097142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.097169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.097429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.097715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.097742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.097962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.098187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.098215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.098441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.098659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.098686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.098944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.099166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.099191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.099407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.099615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.099639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.099880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.100059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.100085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.100289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.100506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.100533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.100753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.100956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.100981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.101174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.101421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.101459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.101662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.101886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.101912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.102145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.102340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.102367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.102566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.102734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.102759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.102984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.103216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.103243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.103470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.103719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.103744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.103911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.104137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.104165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.104423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.104688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.104715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.104943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.105133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.105158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.105393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.105556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.105582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.105780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.106005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.106033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.106231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.106513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.106563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.106780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.106999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.107024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.107194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.107410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.107437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.107635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.107850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.107878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.108074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.108301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.108326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.108521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.108840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.108895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.109149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.109364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.109391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.109613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.109830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.109857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.110074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.110332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.110358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.110584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.110960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.111010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.111203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.111445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.111472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.111719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.111951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.111976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.112175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.112477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.112534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.112789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.113011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.113043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.113269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.113622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.113688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.113949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.114148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.114176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.114402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.114621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.114648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.114896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.115131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.115156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.115370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.115617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.115666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.115913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.116132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.116159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.116402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.116650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.116675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.116903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.117140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.117167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.117390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.117693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.117718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.927 [2024-05-15 07:04:50.117968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.118160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.927 [2024-05-15 07:04:50.118184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.927 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.118401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.118604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.118633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.118887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.119141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.119166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.119414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.119616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.119641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.119847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.120066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.120094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.120286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.120509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.120539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.120741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.120965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.120993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.121215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.121458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.121485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.121736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.121958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.121984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.122148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.122375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.122402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.122600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.122819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.122845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.123077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.123269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:35.928 [2024-05-15 07:04:50.123297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:35.928 qpair failed and we were unable to recover it.
00:27:35.928 [2024-05-15 07:04:50.123500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.123789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.123839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.124065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.124282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.124309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.124558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.124746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.124774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.125001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.125176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.125201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.125403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.125640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.125704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.125957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.126177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.126202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.126415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.126789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.126836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.127035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.127260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.127285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.196 qpair failed and we were unable to recover it.
00:27:36.196 [2024-05-15 07:04:50.127482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.196 [2024-05-15 07:04:50.127668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.127695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.127886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.128076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.128103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.128303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.128525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.128552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.128749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.128946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.128974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.129173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.129358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.129387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.129581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.129797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.129825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.130054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.130345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.130403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.130626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.130874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.130901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.131131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.131540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.131588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.131810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.132038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.132066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.132284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.132563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.132623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.132868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.133064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.133091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.133318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.133630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.133655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.133864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.134042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.134068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.134268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.134517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.134541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.134775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.134997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.135026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.135230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.135480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.135507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.135726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.135920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.135953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.136138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.136331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.136357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.136617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.136918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.136979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.137174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.137379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.137407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.137642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.137852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.137904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.138145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.138334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.138359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.138561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.138765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.138791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.139011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.139210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.139238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.139431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.139622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.139649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.139869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.140055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.140082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.140323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.140593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.140618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.140792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.140966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.140990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.141199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.141562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.141620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.141843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.142070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.142101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.142328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.142641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.142699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.142936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.143198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.143230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.143463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.143634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.143659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.143859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.144064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.144093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.144317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.144511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.144539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.144786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.144980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.145007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.145208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.145405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.145434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.145659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.145868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.145895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.146118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.146314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.146341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.146536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.146721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.146748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.146971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.147191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.147216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.147390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.147598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.147626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.147825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.148051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.148079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.148310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.148527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.148555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.148771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.148993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.149021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.149221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.149442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.149469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.197 qpair failed and we were unable to recover it.
00:27:36.197 [2024-05-15 07:04:50.149678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.197 [2024-05-15 07:04:50.149873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.149901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.150126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.150360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.150409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.150610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.150808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.150834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.151077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.151308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.151360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.151562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.151812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.151839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.152051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.152249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.152274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.152529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.152730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.152760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.152979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.153171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.153201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.153450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.153675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.198 [2024-05-15 07:04:50.153702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.198 qpair failed and we were unable to recover it.
00:27:36.198 [2024-05-15 07:04:50.153920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.154133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.154163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.154382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.154772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.154821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.155057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.155368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.155426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.155648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.155844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.155870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.156079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.156277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.156301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.156476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.156698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.156727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.156941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.157148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.157175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 
00:27:36.198 [2024-05-15 07:04:50.157400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.157725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.157776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.157972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.158171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.158198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.158391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.158608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.158635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.158883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.159082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.159110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.159332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.159629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.159686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.159943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.160116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.160141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.160372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.160600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.160625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 
00:27:36.198 [2024-05-15 07:04:50.160851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.161104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.161132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.161359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.161702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.161755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.161966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.162180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.162210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.162442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.162611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.162637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.162835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.163159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.163189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.163412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.163596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.163625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.163874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.164055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.164081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 
00:27:36.198 [2024-05-15 07:04:50.164289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.164534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.164562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.164758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.164955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.164983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.165210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.165434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.165463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.165689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.165911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.165943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.166139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.166341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.166369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.166589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.166764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.166789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.166994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.167181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.167216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 
00:27:36.198 [2024-05-15 07:04:50.167422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.167618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.167645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.167846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.168062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.168091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.168325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.168717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.168766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.168959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.169184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.169211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.169467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.169715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.169743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.169938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.170193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.170219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.170454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.170736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.170792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 
00:27:36.198 [2024-05-15 07:04:50.171017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.171208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.171237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.171470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.171680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.171704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.171903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.172080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.172108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.172287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.172450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.172475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.172644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.172812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.172837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.198 qpair failed and we were unable to recover it. 00:27:36.198 [2024-05-15 07:04:50.173040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.198 [2024-05-15 07:04:50.173261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.173285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.173497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.173802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.173859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 
00:27:36.199 [2024-05-15 07:04:50.174068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.174332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.174383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.174617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.174809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.174836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.175038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.175254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.175319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.175552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.175841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.175869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.176101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.176271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.176295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.176470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.176654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.176685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.176942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.177162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.177192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 
00:27:36.199 [2024-05-15 07:04:50.177422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.177640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.177700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.177952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.178179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.178207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.178444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.178634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.178664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.178916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.179119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.179145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.179385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.179614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.179644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.179860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.180082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.180110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.180364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.180588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.180616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 
00:27:36.199 [2024-05-15 07:04:50.180848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.181043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.181070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.181262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.181519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.181568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.181765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.181995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.182021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.182208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.182455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.182482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.182706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.182896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.182925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.183156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.183375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.183402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.183590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.183838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.183866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 
00:27:36.199 [2024-05-15 07:04:50.184081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.184298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.184325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.184520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.184766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.184793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.184996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.185200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.185225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.185406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.185582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.185607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.185827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.186029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.186057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.186305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.186565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.186591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.186781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.186999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.187027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 
00:27:36.199 [2024-05-15 07:04:50.187250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.187568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.187624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.187835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.188023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.188050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.188257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.188528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.188577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.188822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.189015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.189040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.189217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.189419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.189443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.189671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.189889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.189916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.190144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.190365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.190391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 
00:27:36.199 [2024-05-15 07:04:50.190650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.190905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.190940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.191165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.191557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.191615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.199 qpair failed and we were unable to recover it. 00:27:36.199 [2024-05-15 07:04:50.191845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.199 [2024-05-15 07:04:50.192054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.192079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.192257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.192470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.192515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.192747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.192970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.193001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.193230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.193451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.193475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.193679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.193878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.193906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 
00:27:36.200 [2024-05-15 07:04:50.194135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.194325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.194384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.194608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.194831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.194858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.195092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.195330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.195357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.195554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.195833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.195886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.196123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.196343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.196371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.196574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.196829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.196856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.197087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.197334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.197385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 
00:27:36.200 [2024-05-15 07:04:50.197612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.197799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.197825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.198040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.198298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.198350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.198567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.198799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.198829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.199067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.199288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.199317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.199552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.199753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.199778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.200015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.200268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.200293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.200507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.200842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.200897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 
00:27:36.200 [2024-05-15 07:04:50.201169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.201415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.201442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.201684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.201876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.201901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.202104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.202305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.202329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.202502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.202712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.202738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.202945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.203118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.203143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.203341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.203568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.203612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.203817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.204042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.204070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 
00:27:36.200 [2024-05-15 07:04:50.204271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.204523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.204572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.204801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.204999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.205027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.205254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.205479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.205507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.205736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.205926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.205959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.206158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.206380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.206411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.206609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.206804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.206831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.207097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.207310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.207340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 
00:27:36.200 [2024-05-15 07:04:50.207601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.207796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.207823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.208048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.208262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.208291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.208502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.208703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.208734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.208965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.209193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.209220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.209439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.209707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.209752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.209979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.210175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.210203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 00:27:36.200 [2024-05-15 07:04:50.210393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.210581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.200 [2024-05-15 07:04:50.210611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.200 qpair failed and we were unable to recover it. 
00:27:36.203 [2024-05-15 07:04:50.282596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.282800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.282825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.203 qpair failed and we were unable to recover it. 00:27:36.203 [2024-05-15 07:04:50.283055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.283300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.283327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.203 qpair failed and we were unable to recover it. 00:27:36.203 [2024-05-15 07:04:50.283528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.283776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.283804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.203 qpair failed and we were unable to recover it. 00:27:36.203 [2024-05-15 07:04:50.284032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.284247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.284273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.203 qpair failed and we were unable to recover it. 00:27:36.203 [2024-05-15 07:04:50.284527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.284851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.284901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.203 qpair failed and we were unable to recover it. 00:27:36.203 [2024-05-15 07:04:50.285132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.203 [2024-05-15 07:04:50.285328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.285356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.285605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.285827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.285854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 
00:27:36.204 [2024-05-15 07:04:50.286072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.286353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.286409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.286656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.286844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.286871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.287084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.287259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.287284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.287511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.287739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.287763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.287987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.288210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.288234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.288409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.288714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.288762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.288955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.289193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.289220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 
00:27:36.204 [2024-05-15 07:04:50.289468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.289855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.289909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.290136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.290511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.290560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.290823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.291034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.291068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.291274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.291578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.291607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.291847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.292074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.292102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.292307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.292529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.292554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.292771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.292982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.293011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 
00:27:36.204 [2024-05-15 07:04:50.293231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.293487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.293535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.293779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.294076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.294103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.294303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.294524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.294552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.294799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.295003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.295030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.295254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.295630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.295675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.295891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.296102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.296131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.296357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.296600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.296627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 
00:27:36.204 [2024-05-15 07:04:50.296817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.297003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.297031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.297251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.297487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.297538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.297787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.297985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.298013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.298262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.298514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.298565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.298755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.298988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.299016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.299254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.299535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.299589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.299804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.300022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.300050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 
00:27:36.204 [2024-05-15 07:04:50.300305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.300473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.300497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.300718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.300944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.300977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.301235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.301494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.301521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.301720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.301970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.301998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.302193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.302396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.302420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.302594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.302815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.302841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.303088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.303289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.303316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 
00:27:36.204 [2024-05-15 07:04:50.303547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.303763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.303812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.304037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.304260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.304287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.304502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.304794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.304821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.305077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.305336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.305387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.305613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.305824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.305851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.306091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.306310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.306337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.306581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.306845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.306869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 
00:27:36.204 [2024-05-15 07:04:50.307146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.307492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.307546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.307806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.307998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.308023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.308201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.308427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.308451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.308677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.308890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.308917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.204 [2024-05-15 07:04:50.309185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.309429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.204 [2024-05-15 07:04:50.309456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.204 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.309713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.309958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.309986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.310202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.310407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.310430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 
00:27:36.205 [2024-05-15 07:04:50.310608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.310825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.310853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.311106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.311308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.311335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.311516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.311745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.311770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.311939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.312150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.312177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.312392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.312595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.312619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.312846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.313055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.313083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.313289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.313467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.313492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 
00:27:36.205 [2024-05-15 07:04:50.313682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.313902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.313936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.314135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.314332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.314359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.314580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.314826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.314874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.315110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.315292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.315316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.315539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.315762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.315791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.315983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.316176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.316203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.316423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.316724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.316782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 
00:27:36.205 [2024-05-15 07:04:50.317008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.317231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.317258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.317479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.317706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.317730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.317946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.318145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.318172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.318393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.318709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.318766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.319027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.319227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.319251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.319429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.319662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.319714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.319947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.320172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.320196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 
00:27:36.205 [2024-05-15 07:04:50.320417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.320749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.320804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.321024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.321223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.321250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.321442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.321625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.321652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.321874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.322117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.322142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.322337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.322656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.322721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.322966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.323166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.323193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.323416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.323597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.323624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 
00:27:36.205 [2024-05-15 07:04:50.323845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.324089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.324114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.324286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.324572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.324622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.324848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.325069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.325097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.325313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.325529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.325561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.325784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.325987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.326012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.326233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.326477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.326501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.326718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.326947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.326972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 
00:27:36.205 [2024-05-15 07:04:50.327198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.327387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.327413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.327633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.327849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.327878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.328138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.328396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.328423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.205 qpair failed and we were unable to recover it. 00:27:36.205 [2024-05-15 07:04:50.328650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.328893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.205 [2024-05-15 07:04:50.328920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.329172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.329562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.329620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.329865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.330064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.330088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.330339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.330593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.330617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 
00:27:36.206 [2024-05-15 07:04:50.330788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.330953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.330996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.331202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.331556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.331614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.331812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.332052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.332077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.332354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.332753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.332815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.333082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.333465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.333511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.333711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.333939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.333964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 00:27:36.206 [2024-05-15 07:04:50.334192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.334408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.206 [2024-05-15 07:04:50.334435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.206 qpair failed and we were unable to recover it. 
00:27:36.209 [... identical failure pattern repeated from 07:04:50.334626 through 07:04:50.405273: posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:27:36.209 [2024-05-15 07:04:50.405484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.405737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.405787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.405997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.406216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.406244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.406470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.406861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.406913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.407124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.407391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.407441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.407637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.407834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.407864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.408090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.408260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.408284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.408462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.408692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.408716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 
00:27:36.209 [2024-05-15 07:04:50.408955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.409155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.409184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.409413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.409671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.409696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.409900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.410106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.410137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.410354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.410699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.410753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.410976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.411208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.411239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.411462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.411647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.411714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.411947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.412172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.412200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 
00:27:36.209 [2024-05-15 07:04:50.412391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.412691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.412758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.413010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.413211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.413240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.413454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.413705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.413730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.413954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.414179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.414205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.414433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.414647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.414709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.414968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.415188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.415216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.415436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.415632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.415657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 
00:27:36.209 [2024-05-15 07:04:50.415832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.416006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.416032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.416238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.416553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.416610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.416829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.417050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.417081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.417301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.417528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.417556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.417792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.418007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.418036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.209 qpair failed and we were unable to recover it. 00:27:36.209 [2024-05-15 07:04:50.418232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.418454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.209 [2024-05-15 07:04:50.418482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.210 qpair failed and we were unable to recover it. 00:27:36.210 [2024-05-15 07:04:50.418701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.210 [2024-05-15 07:04:50.418939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.210 [2024-05-15 07:04:50.418968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.210 qpair failed and we were unable to recover it. 
00:27:36.210 [2024-05-15 07:04:50.419188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.210 [2024-05-15 07:04:50.419505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.210 [2024-05-15 07:04:50.419555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.210 qpair failed and we were unable to recover it. 00:27:36.210 [2024-05-15 07:04:50.419781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.419978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.420006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.420202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.420428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.420456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.420659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.420891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.420920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.421157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.421381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.421410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.421642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.421847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.421874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.422075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.422299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.422324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 
00:27:36.479 [2024-05-15 07:04:50.422556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.422925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.422981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.423231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.423574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.423625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.423848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.424073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.424101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.424300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.424516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.424544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.424740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.424957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.424984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.425240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.425515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.425543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.425733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.425921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.425955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 
00:27:36.479 [2024-05-15 07:04:50.426152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.426353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.426378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.426573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.426803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.426828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.427038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.427257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.427328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.427579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.427847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.427878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.428125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.428398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.428422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.428622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.428837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.428864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.429087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.429284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.429312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 
00:27:36.479 [2024-05-15 07:04:50.429567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.429833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.429889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.430098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.430412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.430482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.430675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.430927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.430962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.431159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.431390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.431415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.431607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.431776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.479 [2024-05-15 07:04:50.431801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.479 qpair failed and we were unable to recover it. 00:27:36.479 [2024-05-15 07:04:50.431997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.432224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.432252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.432479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.432702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.432730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 
00:27:36.480 [2024-05-15 07:04:50.432960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.433220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.433248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.433475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.433699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.433726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.433947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.434168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.434192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.434418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.434704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.434729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.434896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.435106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.435132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.435333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.435683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.435743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.435979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.436172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.436200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 
00:27:36.480 [2024-05-15 07:04:50.436412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.436637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.436664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.436862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.437066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.437094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.437287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.437543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.437598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.437827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.438029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.438054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.438281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.438654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.438704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.438942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.439138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.439165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.439382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.439592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.439619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 
00:27:36.480 [2024-05-15 07:04:50.439868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.440066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.440095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.440329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.440509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.440533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.440711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.440891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.440916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.441112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.441425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.441496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.441703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.441901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.441925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.442111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.442347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.442399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.442617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.442806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.442833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 
00:27:36.480 [2024-05-15 07:04:50.443054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.443356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.443413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.443623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.443824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.443848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.444050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.444299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.444327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.444554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.444845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.444904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.445135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.445396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.445421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.445618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.445842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.445868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.480 qpair failed and we were unable to recover it. 00:27:36.480 [2024-05-15 07:04:50.446095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.446307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.480 [2024-05-15 07:04:50.446334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 
00:27:36.481 [2024-05-15 07:04:50.446553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.446754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.446781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.446984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.447199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.447227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.447474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.447670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.447697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.447915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.448125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.448153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.448344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.448571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.448596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.448826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.449111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.449137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.449307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.449511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.449535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 
00:27:36.481 [2024-05-15 07:04:50.449765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.449992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.450020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.450248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.450466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.450494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.450723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.450904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.450933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.451110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.451287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.451311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.451518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.451713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.451740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.451965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.452164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.452189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.452365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.452535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.452560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 
00:27:36.481 [2024-05-15 07:04:50.452756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.452945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.452976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.453183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.453375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.453402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.453592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.453793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.453825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.454045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.454314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.454341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.454592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.454897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.454969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.455192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.455406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.455433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 00:27:36.481 [2024-05-15 07:04:50.455679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.455906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.481 [2024-05-15 07:04:50.455937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.481 qpair failed and we were unable to recover it. 
[... the identical posix_sock_create (connect() failed, errno = 111) / nvme_tcp_qpair_connect_sock failure unit repeats for every attempt from 07:04:50.456145 through 07:04:50.524901, always for tqpair=0x24809f0 with addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:36.486 [2024-05-15 07:04:50.525134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.525513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.525562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.525778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.525975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.526003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.526225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.526424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.526451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.526669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.526911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.526943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.527142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.527334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.527361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.527614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.527832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.527859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.528049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.528260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.528288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 
00:27:36.486 [2024-05-15 07:04:50.528536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.528737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.528761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.528938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.529145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.529170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.529381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.529652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.529679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.529981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.530212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.530239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.486 qpair failed and we were unable to recover it. 00:27:36.486 [2024-05-15 07:04:50.530456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.530821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.486 [2024-05-15 07:04:50.530873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.531110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.531462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.531512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.531778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.531981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.532009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 
00:27:36.487 [2024-05-15 07:04:50.532225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.532447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.532474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.532695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.532940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.532967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.533199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.533426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.533451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.533675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.533897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.533924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.534152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.534329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.534361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.534615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.534804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.534831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.535076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.535305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.535329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 
00:27:36.487 [2024-05-15 07:04:50.535525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.535845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.535891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.536140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.536375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.536399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.536584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.536826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.536852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.537058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.537380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.537438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.537660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.537886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.537913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.538142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.538324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.538350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.538570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.538750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.538779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 
00:27:36.487 [2024-05-15 07:04:50.539009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.539241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.539265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.539485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.539673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.539700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.539919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.540149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.540176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.540401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.540792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.540840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.541065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.541437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.541482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.541708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.541962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.541988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.542192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.542425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.542449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 
00:27:36.487 [2024-05-15 07:04:50.542677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.542904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.542935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.543111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.543314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.543341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.543518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.543733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.543760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.544013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.544212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.544239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.544463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.544837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.544887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.545116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.545405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.487 [2024-05-15 07:04:50.545464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.487 qpair failed and we were unable to recover it. 00:27:36.487 [2024-05-15 07:04:50.545709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.545896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.545923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 
00:27:36.488 [2024-05-15 07:04:50.546159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.546359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.546383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.546607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.546834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.546860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.547089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.547298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.547322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.547544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.547844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.547899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.548132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.548318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.548343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.548545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.548824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.548878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.549101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.549291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.549347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 
00:27:36.488 [2024-05-15 07:04:50.549569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.549880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.549948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.550173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.550389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.550415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.550636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.550889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.550913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.551124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.551384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.551437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.551640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.551891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.551918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.552176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.552408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.552435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.552625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.553003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.553031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 
00:27:36.488 [2024-05-15 07:04:50.553262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.553655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.553717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.553965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.554157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.554184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.554410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.554616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.554641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.554846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.555074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.555102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.555316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.555537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.555564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.555787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.556014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.556042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.556288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.556512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.556539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 
00:27:36.488 [2024-05-15 07:04:50.556759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.556983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.557011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.557224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.557417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.557442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.557667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.557881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.557907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.488 qpair failed and we were unable to recover it. 00:27:36.488 [2024-05-15 07:04:50.558186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.558435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.488 [2024-05-15 07:04:50.558466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.558727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.558918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.558957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.559159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.559393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.559423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.559683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.559911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.559953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 
00:27:36.489 [2024-05-15 07:04:50.560176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.560371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.560396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.560592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.560819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.560847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.561102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.561326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.561353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.561581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.561786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.561813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.562069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.562314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.562341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.562587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.562770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.562797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.562995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.563171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.563195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 
00:27:36.489 [2024-05-15 07:04:50.563414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.563662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.563686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.563913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.564146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.564171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.564348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.564576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.564608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.564830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.565087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.565112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.565334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.565547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.565573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.565778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.566018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.566044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.566286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.566531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.566555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 
00:27:36.489 [2024-05-15 07:04:50.566721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.566886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.566910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.567085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.567329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.567356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.567573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.567823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.567847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.568087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.568336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.568363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.568590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.568786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.568815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.569053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.569308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.569340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.569552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.569772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.569798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 
00:27:36.489 [2024-05-15 07:04:50.570023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.570266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.570293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.570537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.570840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.570906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.571166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.571349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.571373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.571575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.571765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.571793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.572014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.572201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.489 [2024-05-15 07:04:50.572228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.489 qpair failed and we were unable to recover it. 00:27:36.489 [2024-05-15 07:04:50.572447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.572698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.572725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.572944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.573169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.573196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 
00:27:36.490 [2024-05-15 07:04:50.573423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.573619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.573643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.573874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.574066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.574098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.574357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.574645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.574671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.574900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.575092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.575121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.575340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.575557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.575586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.575835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.576060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.576090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.576333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.576565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.576591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 
00:27:36.490 [2024-05-15 07:04:50.576890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.577124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.577149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.577341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.577514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.577538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.577733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.577955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.577985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.578237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.578682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.578734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.578957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.579206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.579233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.579446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.579689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.579716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.579939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.580184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.580211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 
00:27:36.490 [2024-05-15 07:04:50.580468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.580744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.580771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.581024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.581230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.581255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.581632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.581835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.581861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.582086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.582337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.582364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.582551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.582775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.582800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.583021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.583240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.583268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.583491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.583784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.583844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 
00:27:36.490 [2024-05-15 07:04:50.584064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.584282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.584309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.584536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.584712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.584737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.584910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.585103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.585133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.585354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.585557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.585585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.585809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.586053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.586081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.586332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.586592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.586617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.490 qpair failed and we were unable to recover it. 00:27:36.490 [2024-05-15 07:04:50.586810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.587056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.490 [2024-05-15 07:04:50.587082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 
00:27:36.491 [2024-05-15 07:04:50.587320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.587541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.587568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.587815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.588032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.588060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.588259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.588440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.588469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.588690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.588923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.588953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.589153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.589382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.589412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.589610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.589853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.589877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.590075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.590260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.590284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 
00:27:36.491 [2024-05-15 07:04:50.590510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.590678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.590702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.590890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.591117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.591145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.591368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.591670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.591697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.591904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.592162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.592198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.592448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.592641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.592672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.592876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.593111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.593146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.593371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.593572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.593604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 
00:27:36.491 [2024-05-15 07:04:50.593791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.593975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.594001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.594176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.594372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.594396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.594593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.594772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.594799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.595010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.595235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.595262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.595481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.595729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.595753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.595923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.596114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.596142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.596365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.596593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.596618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 
00:27:36.491 [2024-05-15 07:04:50.596860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.597075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.597100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.597299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.597460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.597484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.597684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.597881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.597908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.598139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.598323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.598350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.598601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.598823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.598850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.599082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.599277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.599304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.599512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.599708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.599735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 
00:27:36.491 [2024-05-15 07:04:50.599955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.600185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.491 [2024-05-15 07:04:50.600213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.491 qpair failed and we were unable to recover it. 00:27:36.491 [2024-05-15 07:04:50.600436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.600661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.600685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.600883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.601071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.601100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.601309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.601503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.601531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.601750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.601940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.601968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.602155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.602402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.602429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.602623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.602810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.602840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 
00:27:36.492 [2024-05-15 07:04:50.603064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.603322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.603349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.603602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.603784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.603811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.604035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.604278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.604307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.604493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.604737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.604765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.605032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.605259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.605283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.605466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.605684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.605713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.605944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.606117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.606142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 
00:27:36.492 [2024-05-15 07:04:50.606314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.606599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.606628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.606848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.607098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.607124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.607375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.607599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.607624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.607881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.608105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.608134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.608380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.608633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.608660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.608905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.609096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.609124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.609346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.609589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.609616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 
00:27:36.492 [2024-05-15 07:04:50.609834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.610028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.610055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.610281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.610597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.610652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.610876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.611103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.611131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.611332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.611667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.492 [2024-05-15 07:04:50.611721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.492 qpair failed and we were unable to recover it. 00:27:36.492 [2024-05-15 07:04:50.611944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.612161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.612186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.612371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.612601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.612629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.612854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.613111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.613136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 
00:27:36.493 [2024-05-15 07:04:50.613330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.613610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.613657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.613882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.614106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.614134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.614339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.614560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.614587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.614825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.615045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.615074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.615265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.615451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.615480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.615738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.615957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.615982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.616268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.616657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.616706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 
00:27:36.493 [2024-05-15 07:04:50.616923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.617116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.617144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.617366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.617588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.617615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.617825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.618020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.618048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.618297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.618645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.618695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.619002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.619241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.619268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.619501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.619747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.619774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.620039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.620265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.620289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 
00:27:36.493 [2024-05-15 07:04:50.620508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.620673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.620699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.620946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.621163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.621191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.621441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.621633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.621661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.621868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.622091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.622117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.622341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.622600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.622625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.622850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.623063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.493 [2024-05-15 07:04:50.623091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.493 qpair failed and we were unable to recover it. 00:27:36.493 [2024-05-15 07:04:50.623337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.623561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.623588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 
00:27:36.494 [2024-05-15 07:04:50.623818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.624014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.624040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.624207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.624408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.624447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.624654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.624853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.624877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.625097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.625284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.625313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.625515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.625700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.625726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.625981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.626215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.626242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.626475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.626689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.626716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 
00:27:36.494 [2024-05-15 07:04:50.626946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.627228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.627256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.627477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.627874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.627925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.628128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.628383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.628408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.628635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.628856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.628880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.629097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.629306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.629334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.629527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.629719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.629746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.629961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.630180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.630207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 
00:27:36.494 [2024-05-15 07:04:50.630417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.630763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.630819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.631048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.631267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.631294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.631543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.631794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.631821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.632034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.632212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.632236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.632457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.632705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.632732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.632924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.633173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.633200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.633386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.633555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.633579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 
00:27:36.494 [2024-05-15 07:04:50.633806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.634062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.634090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.634307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.634549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.634576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.634795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.635022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.635050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.494 qpair failed and we were unable to recover it. 00:27:36.494 [2024-05-15 07:04:50.635236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.635459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.494 [2024-05-15 07:04:50.635486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.635711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.635895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.635922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.636181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.636406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.636433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.636644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.636844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.636877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 
00:27:36.495 [2024-05-15 07:04:50.637100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.637324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.637351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.637606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.637825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.637871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.638065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.638282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.638310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.638510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.638702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.638729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.638956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.639180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.639205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.639394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.639592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.639618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 00:27:36.495 [2024-05-15 07:04:50.639806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.640005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.495 [2024-05-15 07:04:50.640030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420 00:27:36.495 qpair failed and we were unable to recover it. 
00:27:36.496 [2024-05-15 07:04:50.656712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.496 [2024-05-15 07:04:50.656886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.496 [2024-05-15 07:04:50.656910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:36.496 qpair failed and we were unable to recover it.
00:27:36.496 [2024-05-15 07:04:50.657131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.496 [2024-05-15 07:04:50.657365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.496 [2024-05-15 07:04:50.657396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.496 qpair failed and we were unable to recover it.
00:27:36.770 [2024-05-15 07:04:50.705485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.705732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.705759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.705964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.706154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.706182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.706396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.706762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.706817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.707074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.707301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.707328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.707581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.707849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.707907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.708126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.708421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.708479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.708710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.708928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.708962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 
00:27:36.770 [2024-05-15 07:04:50.709191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.709495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.709550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.709804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.710036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.710064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.710289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.710569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.710621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.710842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.711072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.711100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.711296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.711579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.711628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.711875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.712074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.712104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.712325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.712552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.712579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 
00:27:36.770 [2024-05-15 07:04:50.712829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.713004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.713029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.713234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.713499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.713548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.713770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.713981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.714006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.714201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.714548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.714602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.714835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.715054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.715081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.715307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.715677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.715727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.715953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.716173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.716200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 
00:27:36.770 [2024-05-15 07:04:50.716402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.716652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.716676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.770 qpair failed and we were unable to recover it. 00:27:36.770 [2024-05-15 07:04:50.716901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.717161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.770 [2024-05-15 07:04:50.717189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.717411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.717625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.717652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.717869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.718103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.718130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.718381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.718551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.718575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.718765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.718982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.719010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.719220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.719621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.719671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 
00:27:36.771 [2024-05-15 07:04:50.719893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.720148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.720176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.720403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.720780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.720830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.721059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.721231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.721255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.721456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.721718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.721745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.721951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.722173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.722199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.722441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.722660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.722687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.722953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.723178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.723206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 
00:27:36.771 [2024-05-15 07:04:50.723422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.723644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.723673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.723926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.724134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.724159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.724393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.724651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.724676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.724902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.725129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.725157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.725380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.725759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.725808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.726028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.726250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.726277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.726524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.726917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.726977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 
00:27:36.771 [2024-05-15 07:04:50.727207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.727518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.727581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.727821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.728012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.728040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.728240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.728460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.728485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.728661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.728855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.728884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.729088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.729340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.729388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.729639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.729828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.729855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.730078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.730299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.730324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 
00:27:36.771 [2024-05-15 07:04:50.730555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.730733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.730757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.730972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.731192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.731220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.731447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.731669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.731693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.771 qpair failed and we were unable to recover it. 00:27:36.771 [2024-05-15 07:04:50.731866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.771 [2024-05-15 07:04:50.732058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.732083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.732281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.732585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.732642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.732834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.733052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.733080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.733309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.733607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.733675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 
00:27:36.772 [2024-05-15 07:04:50.733895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.734097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.734125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.734343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.734558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.734582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.734755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.734951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.734979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.735205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.735483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.735533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.735776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.736065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.736090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.736302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.736536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.736563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.736810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.737008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.737033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 
00:27:36.772 [2024-05-15 07:04:50.737285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.737666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.737715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.737914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.738137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.738165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.738383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.738670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.738694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.738949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.739195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.739222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.739440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.739662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.739689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.739919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.740128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.740152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.740351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.740621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.740671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 
00:27:36.772 [2024-05-15 07:04:50.740917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.741173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.741197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.741432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.741656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.741680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.741873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.742098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.742125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.742353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.742568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.742592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.742813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.743030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.743058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.743286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.743524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.743572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.743802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.744038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.744079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 
00:27:36.772 [2024-05-15 07:04:50.744293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.744579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.744630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.744840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.745089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.745116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.745319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.745561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.745587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.745804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.746049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.746077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.772 qpair failed and we were unable to recover it. 00:27:36.772 [2024-05-15 07:04:50.746261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.772 [2024-05-15 07:04:50.746473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.746497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.746670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.746862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.746889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.747095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.747319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.747346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 
00:27:36.773 [2024-05-15 07:04:50.747564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.747893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.747946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.748167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.748359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.748386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.748610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.748830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.748857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.749062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.749281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.749308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.749518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.749741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.749768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.750016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.750350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.750405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.750666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.750863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.750889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 
00:27:36.773 [2024-05-15 07:04:50.751118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.751468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.751519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.751717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.751963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.751991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.752194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.752435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.752460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.752664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.752889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.752916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.753180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.753577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.753620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.753822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.754015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.754048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.754253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.754500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.754528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 
00:27:36.773 [2024-05-15 07:04:50.754740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.754940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.754968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.755159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.755542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.755591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.755857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.756084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.756114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.756333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.756532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.756558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.756776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.757001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.757029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.757257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.757484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.757509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 00:27:36.773 [2024-05-15 07:04:50.757731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.757986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.773 [2024-05-15 07:04:50.758014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.773 qpair failed and we were unable to recover it. 
00:27:36.773 [2024-05-15 07:04:50.758233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.773 [2024-05-15 07:04:50.758457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.773 [2024-05-15 07:04:50.758485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.773 qpair failed and we were unable to recover it.
[... the identical retry sequence repeats for every attempt timestamped 2024-05-15 07:04:50.758 through 07:04:50.836: two posix_sock_create connect() failed, errno = 111 records, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x24809f0 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:27:36.779 [2024-05-15 07:04:50.836685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.779 [2024-05-15 07:04:50.836908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:36.779 [2024-05-15 07:04:50.836941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:36.779 qpair failed and we were unable to recover it.
00:27:36.779 [2024-05-15 07:04:50.837191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.837471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.837515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.837782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.838004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.838029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.838200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.838432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.838459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.838711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.838914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.838952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.839176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.839620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.839678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.839940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.840121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.840145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.840358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.840666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.840714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 
00:27:36.779 [2024-05-15 07:04:50.840940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.841165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.841192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.841376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.841571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.841598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.841821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.842001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.842026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.842221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.842472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.842504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.842771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.842994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.843023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.843248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.843503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.843527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.843785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.844097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.844125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 
00:27:36.779 [2024-05-15 07:04:50.844381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.844794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.844843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.845070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.845291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.845320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.845545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.845793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.845818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.846056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.846252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.846280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.846526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.846795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.779 [2024-05-15 07:04:50.846822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.779 qpair failed and we were unable to recover it. 00:27:36.779 [2024-05-15 07:04:50.847068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.847284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.847328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.847523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.847739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.847766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 
00:27:36.780 [2024-05-15 07:04:50.847977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.848209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.848236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.848458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.848748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.848772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.848973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.849215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.849242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.849466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.849696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.849754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.849952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.850152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.850179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.850428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.850650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.850677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.850872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.851097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.851126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 
00:27:36.780 [2024-05-15 07:04:50.851344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.851693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.851751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.851984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.852186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.852213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.852438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.852634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.852658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.852877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.853096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.853122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.853289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.853533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.853561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.853821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.854050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.854075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.854267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.854487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.854532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 
00:27:36.780 [2024-05-15 07:04:50.854723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.854954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.854979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.855184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.855585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.855636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.855855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.856071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.856100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.856316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.856631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.856670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.856900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.857078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.857103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.857350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.857662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.857713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.857979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.858177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.858205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 
00:27:36.780 [2024-05-15 07:04:50.858386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.858649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.858676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.858923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.859177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.859203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.859460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.859907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.859975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.860198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.860386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.860414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.860659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.860878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.860905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.780 [2024-05-15 07:04:50.861169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.861499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.780 [2024-05-15 07:04:50.861551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.780 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.861750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.861960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.861988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 
00:27:36.781 [2024-05-15 07:04:50.862214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.862385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.862409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.862645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.862870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.862897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.863139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.863309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.863334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.863545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.863796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.863820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.864002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.864246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.864273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.864492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.864743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.864770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.865024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.865245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.865272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 
00:27:36.781 [2024-05-15 07:04:50.865472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.865695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.865741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.865966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.866198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.866225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.866446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.866663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.866691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.866925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.867154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.867184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.867400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.867704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.867767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.868027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.868209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.868263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.868455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.868669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.868697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 
00:27:36.781 [2024-05-15 07:04:50.868886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.869106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.869134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.869358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.869555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.869580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.869784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.870018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.870047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.870269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.870519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.870564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.870785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.871001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.871029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.871225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.871605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.871655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.871899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.872106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.872134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 
00:27:36.781 [2024-05-15 07:04:50.872326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.872568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.872592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.872821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.873043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.873067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.873242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.873465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.873493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.873704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.873956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.873985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.874241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.874442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.874466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.874665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.874889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.874917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 00:27:36.781 [2024-05-15 07:04:50.875124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.875338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.781 [2024-05-15 07:04:50.875365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.781 qpair failed and we were unable to recover it. 
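On Linux, errno = 111 is ECONNREFUSED: the host's TCP SYN reaches 10.0.0.2, but nothing is listening on NVMe-oF port 4420 because the previous nvmf_tgt instance has been killed (the shell reports this just below), so nvme_tcp_qpair_connect_sock abandons each qpair. The record pattern above repeats for every reconnect attempt until the target comes back. A minimal standalone sketch of the same failure mode, assuming a plain Linux socket and a 1-second retry interval; this is illustrative only, not SPDK's posix.c:

/* Illustrative sketch of the failure mode in this log: connect() to a
 * host with no listener on the port fails with ECONNREFUSED (111). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt + 1);
            close(fd);
            return 0;
        }
        /* ECONNREFUSED (111) is transient while the target restarts. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        sleep(1);               /* assumed retry interval */
    }
    return 1;
}

Run against a port with no listener, it prints "connect() failed, errno = 111 (Connection refused)" on each attempt, which is exactly the pattern filling this log.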
00:27:36.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 628755 Killed "${NVMF_APP[@]}" "$@"
00:27:36.782 07:04:50 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:27:36.782 07:04:50 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:36.782 07:04:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:27:36.782 07:04:50 -- common/autotest_common.sh@712 -- # xtrace_disable
00:27:36.782 07:04:50 -- common/autotest_common.sh@10 -- # set +x
00:27:36.782 07:04:50 -- nvmf/common.sh@469 -- # nvmfpid=629336
00:27:36.782 07:04:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:36.782 07:04:50 -- nvmf/common.sh@470 -- # waitforlisten 629336
00:27:36.782 07:04:50 -- common/autotest_common.sh@819 -- # '[' -z 629336 ']'
00:27:36.782 07:04:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:36.782 07:04:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:36.782 07:04:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:36.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:36.782 07:04:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:36.782 07:04:50 -- common/autotest_common.sh@10 -- # set +x
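This trace is the recovery step: target_disconnect.sh's disconnect_init restarts the target via nvmfappstart, which launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace (per SPDK's common app options, -m 0xF0 is the core mask, -e 0xFFFF the tracepoint group mask, and -i 0 the shared-memory instance id) and then blocks in waitforlisten until the new process, pid 629336, accepts connections on the RPC socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts. Below is a hedged C sketch of that polling loop built only from the values visible in the trace; the real waitforlisten is a shell helper in autotest_common.sh, and the 100 ms poll interval is an assumption:

/* Sketch of waitforlisten's observable behavior: poll until a process
 * accepts connections on the UNIX domain socket, or give up after
 * max_retries tries. Not SPDK source. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;             /* RPC socket is up */
        }
        close(fd);
        usleep(100 * 1000);       /* assumed poll interval; retry */
    }
    return -1;                    /* timed out */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "target never started listening\n");
        return 1;
    }
    printf("target is up\n");
    return 0;
}

Once the socket answers, the harness proceeds; until then the host side keeps logging the errno = 111 connect failures shown earlier.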
00:27:36.783 [2024-05-15 07:04:50.895584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.895783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.895808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.783 qpair failed and we were unable to recover it. 00:27:36.783 [2024-05-15 07:04:50.895977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.896172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.896196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.783 qpair failed and we were unable to recover it. 00:27:36.783 [2024-05-15 07:04:50.896371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.896574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.896600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.783 qpair failed and we were unable to recover it. 00:27:36.783 [2024-05-15 07:04:50.896768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.896969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.896994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.783 qpair failed and we were unable to recover it. 00:27:36.783 [2024-05-15 07:04:50.897180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.897406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.783 [2024-05-15 07:04:50.897431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.783 qpair failed and we were unable to recover it. 00:27:36.783 [2024-05-15 07:04:50.897607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.897780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.897804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.898009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.898208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.898239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 
00:27:36.784 [2024-05-15 07:04:50.898440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.898632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.898656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.898834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.899038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.899064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.899244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.899424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.899449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.899623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.899789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.899814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.900017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.900191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.900216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.900419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.900619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.900643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.900870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.901042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.901068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 
00:27:36.784 [2024-05-15 07:04:50.901235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.901412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.901437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.901635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.901807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.901832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.902060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.902252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.902278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.902454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.902625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.902650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.902869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.903068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.903094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.903271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.903448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.903473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.903642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.903845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.903871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 
00:27:36.784 [2024-05-15 07:04:50.904079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.904301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.904326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.904487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.904665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.904690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.904897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.905077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.905102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.905269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.905440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.905466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.905671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.905839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.905864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.906102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.906275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.906301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.906484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.906654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.906679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 
00:27:36.784 [2024-05-15 07:04:50.906871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.907055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.907080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.907254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.907429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.907454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.907632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.907805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.907832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.908015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.908190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.908217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.908392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.908596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.908620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.908794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.909017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.909043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.784 qpair failed and we were unable to recover it. 00:27:36.784 [2024-05-15 07:04:50.909216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.909393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.784 [2024-05-15 07:04:50.909417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 
00:27:36.785 [2024-05-15 07:04:50.909618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.909818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.909844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.910025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.910207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.910232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.910411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.910612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.910640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.910847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.911022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.911049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.911224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.911392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.911417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.911616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.911845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.911871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.912077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.912235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.912260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 
00:27:36.785 [2024-05-15 07:04:50.912425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.912627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.912651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.912856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.913030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.913055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.913243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.913416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.913442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.913630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.913834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.913859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.914056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.914240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.914265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.914465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.914646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.914672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.914863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.915109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.915135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 
00:27:36.785 [2024-05-15 07:04:50.915339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.915564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.915590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.915761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.915942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.915967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.916142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.916316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.916343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.916561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.916736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.916760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.916957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.917180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.917205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.917384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.917593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.917618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 00:27:36.785 [2024-05-15 07:04:50.917789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.917994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.918019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it. 
[... further verbatim repeats of the same sequence, timestamps 07:04:50.918230 through 07:04:50.920512, omitted ...]
00:27:36.785 [2024-05-15 07:04:50.922213] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:36.785 [2024-05-15 07:04:50.922286] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:36.785 [2024-05-15 07:04:50.924166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.924382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.785 [2024-05-15 07:04:50.924410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.785 qpair failed and we were unable to recover it.
[... the same sequence continues to repeat verbatim, timestamps 07:04:50.924612 through 07:04:50.950321, omitted ...]
00:27:36.788 [2024-05-15 07:04:50.950525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.950694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.950733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.950922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.951133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.951160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.951370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.951585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.951609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.951820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.951998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.952023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.952202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.952400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.952425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.952622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.952789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.952814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.953019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.953194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.953219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 
00:27:36.788 [2024-05-15 07:04:50.953394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.953563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.953587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.953757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.953958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.953983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.954180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.954387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.954412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.954639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.954840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.954867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.955046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.955209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.955233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.955433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.955662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.955687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.955864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.956038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.956062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 
00:27:36.788 [2024-05-15 07:04:50.956245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.956439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.956463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.956657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.956827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.956854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.957046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.957300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.957325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.957522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.957718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.957743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.957971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.958150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.958174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.958396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.958596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.958631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.958827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.958999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.959024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 
00:27:36.788 [2024-05-15 07:04:50.959252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.959438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.959462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.788 qpair failed and we were unable to recover it. 00:27:36.788 [2024-05-15 07:04:50.959675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.788 [2024-05-15 07:04:50.959882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.959911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.960130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.960368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.960392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.960588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.960782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.960807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.960987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.961190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.961215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.961390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.961596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.961619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.961835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.962044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.962071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 
00:27:36.789 [2024-05-15 07:04:50.962273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.962471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.962495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.962692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.962859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.962884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.963068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.963236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.963261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.963430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.963663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.963688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.963890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.964095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.964125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.964304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.964505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.964529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.964727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.964896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.964922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 
00:27:36.789 [2024-05-15 07:04:50.965127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.965293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.965317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.965482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.965676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.965700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.965896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.966092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.966118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.966284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.966453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.966478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.966648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.966847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.966871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.967078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.967249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.967273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.967501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.967695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.967720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 
00:27:36.789 [2024-05-15 07:04:50.967890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.968098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.968124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.968343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.968546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.968571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.968774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.789 [2024-05-15 07:04:50.969011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.969036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.969237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.969434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.969459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.969642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.969824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.969849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.970053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.970255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.970288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 00:27:36.789 [2024-05-15 07:04:50.970500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.970705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:36.789 [2024-05-15 07:04:50.970730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:36.789 qpair failed and we were unable to recover it. 
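The EAL notice above is DPDK's environment abstraction layer reporting that NUMA node 1 had no free 2048 kB hugepages when it scanned the system. As an illustrative aside (not part of the test output), a minimal C sketch that reads the same per-node counter from the standard Linux sysfs layout; the node1 path mirrors the node named in the message:

/*
 * Minimal sketch, assuming the standard sysfs hugepage layout on a
 * NUMA system. Reads the free 2048 kB hugepage count for node 1,
 * the counter behind the EAL notice above.
 */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    long free_pages = -1;

    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%ld", &free_pages) != 1) {
        fclose(f);
        fprintf(stderr, "could not parse %s\n", path);
        return 1;
    }
    fclose(f);
    printf("node1 free 2048kB hugepages: %ld\n", free_pages);
    return 0;
}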
[... the reconnect-failure sequence resumes immediately after the EAL notice and continues, attempt after attempt, from 07:04:50.969 through 07:04:51.004, every attempt against tqpair=0x24809f0 (10.0.0.2:4420) ending in "qpair failed and we were unable to recover it." ...]
00:27:37.063 [2024-05-15 07:04:51.004110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.063 [2024-05-15 07:04:51.004283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.063 [2024-05-15 07:04:51.004318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.063 qpair failed and we were unable to recover it.
00:27:37.063 [2024-05-15 07:04:51.004521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.004722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.004746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.004944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.005143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.005167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.005366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.005566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.005591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.005759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.005949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.005974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.006140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.006334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.006358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.006558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.006746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.006772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.007002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.007178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.007202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 
00:27:37.063 [2024-05-15 07:04:51.007404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.007602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.007627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.007856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.008041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.008067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.008242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.008416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.008443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.008642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.008819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.008843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.009022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.009229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.009253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.009463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.009701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.063 [2024-05-15 07:04:51.009736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.009759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 00:27:37.063 [2024-05-15 07:04:51.010003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.010204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.010228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.063 qpair failed and we were unable to recover it. 
00:27:37.063 [2024-05-15 07:04:51.010438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.063 [2024-05-15 07:04:51.010673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.010705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.010916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.011110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.011134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.011343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.011574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.011599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.011785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.011962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.011987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.012167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.012400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.012424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.012618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.012822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.012847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.013085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.013254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.013278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 
00:27:37.064 [2024-05-15 07:04:51.013473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.013686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.013710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.013944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.014147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.014172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.014478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.014720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.014745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.014943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.015160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.015185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.015365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.015579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.015603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.015769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.015974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.015999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.016205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.016452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.016476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 
00:27:37.064 [2024-05-15 07:04:51.016687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.016938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.016968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.017172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.017365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.017389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.017599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.017767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.017791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.017999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.018194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.018219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.018603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.018876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.018900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.019138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.019323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.019348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.019548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.019717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.019740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 
00:27:37.064 [2024-05-15 07:04:51.019942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.020126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.020153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.020364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.020553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.020577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.020775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.020960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.020985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.021185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.021382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.021406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.021621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.021827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.021853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.022066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.022266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.022291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.022503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.022701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.022725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 
00:27:37.064 [2024-05-15 07:04:51.023069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.023291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.023328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.023544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.023775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.064 [2024-05-15 07:04:51.023800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.064 qpair failed and we were unable to recover it. 00:27:37.064 [2024-05-15 07:04:51.024007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.024215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.024251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.024469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.024694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.024719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.024936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.025112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.025136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.025344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.025527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.025552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.025772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.025983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.026009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 
00:27:37.065 [2024-05-15 07:04:51.026190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.026392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.026416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.026589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.026799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.026823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.027037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.027240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.027265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.027477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.027679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.027704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.027911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.028110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.028134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.028310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.028489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.028514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.028702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.028915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.028958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 
00:27:37.065 [2024-05-15 07:04:51.029137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.029305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.029329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.029518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.029689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.029716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.029943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.030145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.030169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.030361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.030561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.030586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.030789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.030989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.031015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.031194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.031363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.031389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.031619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.031811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.031835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 
00:27:37.065 [2024-05-15 07:04:51.032002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.032199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.032232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.032461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.032697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.032721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.032895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.033099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.033124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.033333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.033559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.033584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.033747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.033991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.034017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.034230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.034415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.034440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.034662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.034913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.034954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 
00:27:37.065 [2024-05-15 07:04:51.035170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.035396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.035420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.035621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.035823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.035847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.036036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.036238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.036278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.065 qpair failed and we were unable to recover it. 00:27:37.065 [2024-05-15 07:04:51.036496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.065 [2024-05-15 07:04:51.036656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.036681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.036895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.037087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.037112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.037317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.037486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.037510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.037705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.038019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.038045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 
00:27:37.066 [2024-05-15 07:04:51.038253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.038455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.038479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.038711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.038874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.038899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.039119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.039324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.039353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.039550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.039744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.039770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.040006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.040180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.040205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.040414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.040588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.040613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.040814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.041010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.041035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 
00:27:37.066 [2024-05-15 07:04:51.041261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.041459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.041483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.041656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.041858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.041883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.042085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.042289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.042314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.042485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.042683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.042708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.042878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.043046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.043072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.043274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.043482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.043507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.043742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.043915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.043948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 
00:27:37.066 [2024-05-15 07:04:51.044154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.044318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.044342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.044541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.044738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.044762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.044943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.045143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.045170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.045377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.045577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.045602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.045773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.045974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.046000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.046181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.046381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.046406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 00:27:37.066 [2024-05-15 07:04:51.046580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.046772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.066 [2024-05-15 07:04:51.046796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.066 qpair failed and we were unable to recover it. 
00:27:37.066 [2024-05-15 07:04:51.047035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.066 [2024-05-15 07:04:51.047206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.066 [2024-05-15 07:04:51.047238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.066 qpair failed and we were unable to recover it.
[the identical three-message pattern repeats for every reconnect attempt from 2024-05-15 07:04:51.047471 through 07:04:51.113404 (log timestamps 00:27:37.066 - 00:27:37.072): two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error" for tqpair=0x24809f0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." No attempt in this span succeeds.]
00:27:37.072 [2024-05-15 07:04:51.113605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.113796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.113820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.113993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.114230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.114254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.114443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.114641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.114665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.114861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.115063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.115089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.115261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.115465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.115491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.115668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.115869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.115894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.116093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.116272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.116297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 
00:27:37.072 [2024-05-15 07:04:51.116486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.116686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.116711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.116912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.117155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.117180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.117358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.117553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.117578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.117792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.118017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.118044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.118226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.118429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.118454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.118656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.118855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.118879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.119108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.119281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.119306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 
00:27:37.072 [2024-05-15 07:04:51.119507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.119762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.119786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.120017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.120217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.120242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.120440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.120639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.120665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.120867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.121101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.121126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.072 [2024-05-15 07:04:51.121334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.121505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.072 [2024-05-15 07:04:51.121529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.072 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.121727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.121915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.121947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.122114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.122294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.122320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 
00:27:37.073 [2024-05-15 07:04:51.122526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.122723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.122748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.122955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.123150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.123174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.123356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.123557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.123585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.123802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.124037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.124063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.124241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.124440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.124465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.124665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.124865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.124889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 00:27:37.073 [2024-05-15 07:04:51.125275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.125523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.073 [2024-05-15 07:04:51.125551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.073 qpair failed and we were unable to recover it. 
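For reference, errno 111 on Linux is ECONNREFUSED: nothing at 10.0.0.2 is accepting connections on 4420 (the NVMe/TCP well-known port), so every qpair connect attempt above is refused at the TCP level. The standalone C sketch below is illustrative only, not SPDK code; the loopback address is a placeholder and assumes no local listener on the port. It reproduces the same error class that posix_sock_create reports:

/* econnrefused_demo.c - minimal illustration of errno 111 (ECONNREFUSED).
 * Build: cc -o econnrefused_demo econnrefused_demo.c
 * Not SPDK code; address/port are placeholders with no listener assumed.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound, Linux prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run on a host with nothing bound to the chosen port, the program prints the same "connect() failed, errno = 111" seen throughout this log.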
[... connect() failed (errno = 111) retries continue through 07:04:51.127, interleaved with the following one-off messages ...]
00:27:37.073 [2024-05-15 07:04:51.127579] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:37.073 [2024-05-15 07:04:51.127707] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:37.073 [2024-05-15 07:04:51.127726] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:37.073 [2024-05-15 07:04:51.127739] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:37.073 [2024-05-15 07:04:51.127812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:27:37.073 [2024-05-15 07:04:51.127863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:27:37.073 [2024-05-15 07:04:51.127887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:27:37.073 [2024-05-15 07:04:51.127890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
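The reactor notices record SPDK's per-core event loops coming up: one reactor thread runs pinned to each core in the configured core mask. As a rough analogy only (this is not SPDK's implementation), the standalone C sketch below pins the calling thread to core 5, the first core named in the notices above, using the Linux sched_setaffinity call, and reports where it ends up running:

/* core_pin_demo.c - rough analogy for per-core reactor threads.
 * Not SPDK code; core number 5 is taken from the notice above and
 * must exist on the machine for the call to succeed.
 * Build: cc -o core_pin_demo core_pin_demo.c
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(5, &set);                       /* pin to core 5 */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");        /* e.g. core 5 does not exist */
        return 1;
    }

    /* sched_getcpu() reports the core this thread is currently on. */
    printf("event loop thread pinned to core %d\n", sched_getcpu());
    return 0;
}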
[... the connect() failed (errno = 111) / sock connection error of tqpair=0x24809f0 at 10.0.0.2, port 4420 / "qpair failed and we were unable to recover it." sequence resumes at 07:04:51.128 and repeats for all remaining connection attempts through 07:04:51.166 ...]
00:27:37.076 [2024-05-15 07:04:51.166530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.166733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.166761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.076 qpair failed and we were unable to recover it. 00:27:37.076 [2024-05-15 07:04:51.166943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.167120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.167145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.076 qpair failed and we were unable to recover it. 00:27:37.076 [2024-05-15 07:04:51.167311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.167493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.167518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.076 qpair failed and we were unable to recover it. 00:27:37.076 [2024-05-15 07:04:51.167710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.167888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.167912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.076 qpair failed and we were unable to recover it. 00:27:37.076 [2024-05-15 07:04:51.168100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.076 [2024-05-15 07:04:51.168310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.168335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.168661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.168889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.168917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.169136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.169459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.169485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 
00:27:37.077 [2024-05-15 07:04:51.169684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.169882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.169908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.170098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.170283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.170310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.170475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.170650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.170675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.170854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.171048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.171074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.171283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.171480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.171505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.171675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.171871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.171896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.172071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.172248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.172274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 
00:27:37.077 [2024-05-15 07:04:51.172462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.172691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.172717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.172923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.173153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.173179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.173381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.173664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.173688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.173859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.174031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.174057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.174256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.174441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.174469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.174636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.174835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.174861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.175064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.175239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.175264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 
00:27:37.077 [2024-05-15 07:04:51.175432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.175610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.175635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.175833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.175997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.176023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.176198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.176365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.176389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.176586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.176761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.176787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.176959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.177163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.177188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.177371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.177574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.177600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.177779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.177988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.178015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 
00:27:37.077 [2024-05-15 07:04:51.178219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.178441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.178467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.178644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.178829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.178853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.179055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.179237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.179264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.179458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.179659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.179684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.179862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.180037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.180062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.077 qpair failed and we were unable to recover it. 00:27:37.077 [2024-05-15 07:04:51.180411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.077 [2024-05-15 07:04:51.180601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.180625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.180800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.181010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.181036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 
00:27:37.078 [2024-05-15 07:04:51.181235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.181395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.181420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.181589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.181766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.181795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.182001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.182182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.182207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.182390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.182557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.182584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.182777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.182994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.183020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.183194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.183367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.183393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.183582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.183785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.183810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 
00:27:37.078 [2024-05-15 07:04:51.184013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.184188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.184215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.184398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.184568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.184593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.184767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.184966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.184991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.185172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.185381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.185406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.185608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.185810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.185834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.186043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.186211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.186238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.186420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.186599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.186623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 
00:27:37.078 [2024-05-15 07:04:51.186793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.186964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.186997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.187164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.187344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.187369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.187536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.187876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.187901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.188110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.188280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.188305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.188485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.188687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.188712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.189046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.189250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.189275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.189486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.189656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.189680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 
00:27:37.078 [2024-05-15 07:04:51.189853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.190150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.190176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.190445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.190623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.190648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.190843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.191017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.191042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.191221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.191397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.191422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.191597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.191779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.191804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.192101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.192282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.192307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 00:27:37.078 [2024-05-15 07:04:51.192506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.192686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.078 [2024-05-15 07:04:51.192710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.078 qpair failed and we were unable to recover it. 
00:27:37.078 [2024-05-15 07:04:51.192891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.193079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.193106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.193310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.193492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.193518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.193719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.193897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.193922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.194115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.194319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.194345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.194559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.194768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.194801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.195013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.195197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.195228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.195423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.195636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.195666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 
00:27:37.079 [2024-05-15 07:04:51.195942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.196137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.196165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.196422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.196615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.196646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.196867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.197100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.197131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.197500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.197684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.197714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.197912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.198150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.198180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.198393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.198569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.198598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.198830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.199041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.199070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 
00:27:37.079 [2024-05-15 07:04:51.199258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.199470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.199500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.199700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.199906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.199942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.200145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.200344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.200373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.200574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.200780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.200810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.201010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.201194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.201224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.201419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.201605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.201636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.201831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.202052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.202082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 
00:27:37.079 [2024-05-15 07:04:51.202293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.202491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.202521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.202723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.202900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.202937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.203141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.203330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.203360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.203597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.203800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.203831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.204046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.204402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.204432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.204737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.204957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.204987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.205228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.205412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.205443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 
00:27:37.079 [2024-05-15 07:04:51.205690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.205880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.205910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.079 qpair failed and we were unable to recover it. 00:27:37.079 [2024-05-15 07:04:51.206127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.079 [2024-05-15 07:04:51.206344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.206374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.080 qpair failed and we were unable to recover it. 00:27:37.080 [2024-05-15 07:04:51.206575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.206783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.206812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.080 qpair failed and we were unable to recover it. 00:27:37.080 [2024-05-15 07:04:51.207032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.207254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.207285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.080 qpair failed and we were unable to recover it. 00:27:37.080 [2024-05-15 07:04:51.207480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.207696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.207726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.080 qpair failed and we were unable to recover it. 00:27:37.080 [2024-05-15 07:04:51.207938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.208141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.208171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.080 qpair failed and we were unable to recover it. 00:27:37.080 [2024-05-15 07:04:51.208371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.208589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.080 [2024-05-15 07:04:51.208619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.080 qpair failed and we were unable to recover it. 
00:27:37.080 [2024-05-15 07:04:51.208841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.080 [2024-05-15 07:04:51.209025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.080 [2024-05-15 07:04:51.209058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420
00:27:37.080 qpair failed and we were unable to recover it.
[... the same three-message pattern repeats back to back for every intermediate reconnect attempt (timestamps 07:04:51.209377 through 07:04:51.278093, wall clock 00:27:37.080-00:27:37.085): two posix_sock_create connect() failures with errno = 111, then one nvme_tcp_qpair_connect_sock error for tqpair=0x7fd180000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:27:37.085 [2024-05-15 07:04:51.278335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.085 [2024-05-15 07:04:51.278567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.085 [2024-05-15 07:04:51.278596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420
00:27:37.085 qpair failed and we were unable to recover it.
00:27:37.085 [2024-05-15 07:04:51.278795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.279038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.279068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 00:27:37.085 [2024-05-15 07:04:51.279285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.279509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.279540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 00:27:37.085 [2024-05-15 07:04:51.279772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.279957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.279987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 00:27:37.085 [2024-05-15 07:04:51.280215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.280400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.280430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 00:27:37.085 [2024-05-15 07:04:51.280623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.280836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.280866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 00:27:37.085 [2024-05-15 07:04:51.281070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.281269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.281297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 00:27:37.085 [2024-05-15 07:04:51.281493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.281716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.281745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 
00:27:37.085 [2024-05-15 07:04:51.281949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.282127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.085 [2024-05-15 07:04:51.282158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.085 qpair failed and we were unable to recover it. 00:27:37.085 [2024-05-15 07:04:51.282354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.282595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.282625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.086 qpair failed and we were unable to recover it. 00:27:37.086 [2024-05-15 07:04:51.282831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.283016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.283046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.086 qpair failed and we were unable to recover it. 00:27:37.086 [2024-05-15 07:04:51.283243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.283469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.283499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.086 qpair failed and we were unable to recover it. 00:27:37.086 [2024-05-15 07:04:51.283749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.283963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.283993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.086 qpair failed and we were unable to recover it. 00:27:37.086 [2024-05-15 07:04:51.284216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.284436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.086 [2024-05-15 07:04:51.284466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.086 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.284691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.284875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.284905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 
00:27:37.357 [2024-05-15 07:04:51.285113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.285321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.285351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.285560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.285750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.285780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.285978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.286170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.286199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.286421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.286671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.286701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.286920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.287143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.287173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.287389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.287576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.287606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.287822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.288026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.288057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 
00:27:37.357 [2024-05-15 07:04:51.288260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.288478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.288509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.288730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.288912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.288947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.289168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.289375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.289404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.289628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.289835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.289865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.290084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.290273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.290303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.290526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.290729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.290763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.290998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.291212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.291242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 
00:27:37.357 [2024-05-15 07:04:51.291441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.291649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.291678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.291865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.292079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.292110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.292308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.292527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.292557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.292754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.292946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.292977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.293199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.293382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.293411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.293627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.293869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.293898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.294103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.294315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.294344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 
00:27:37.357 [2024-05-15 07:04:51.294536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.294734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.294764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.294978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.295188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.295223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.295470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.295654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.295684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.295885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.296067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.296098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.296319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.296496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.296526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.296739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.296951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.357 [2024-05-15 07:04:51.296981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.357 qpair failed and we were unable to recover it. 00:27:37.357 [2024-05-15 07:04:51.297185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.297435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.297465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 
00:27:37.358 [2024-05-15 07:04:51.297690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.297882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.297912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.298109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.298316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.298346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.298598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.298797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.298826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.299043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.299221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.299250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.299482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.299664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.299698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.299919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.300107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.300136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.300352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.300546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.300576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 
00:27:37.358 [2024-05-15 07:04:51.300770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.300948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.300978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.301183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.301390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.301419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.301612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.301828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.301859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.302104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.302296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.302327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.302544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.302731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.302761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.302959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.303146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.303176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.303377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.303619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.303647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 
00:27:37.358 [2024-05-15 07:04:51.303873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.304089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.304124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.304322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.304523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.304553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.304765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.304955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.304985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.305209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.305424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.305453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.305649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.305851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.305880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.306110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.306311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.306341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.306559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.306747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.306776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 
00:27:37.358 [2024-05-15 07:04:51.306989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.307200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.307228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.307439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.307652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.307683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.307942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.308154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.308185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.308433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.308611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.308641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.308871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.309081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.309112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.309336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.309539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.309567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 00:27:37.358 [2024-05-15 07:04:51.309794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.310003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.310032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.358 qpair failed and we were unable to recover it. 
00:27:37.358 [2024-05-15 07:04:51.310233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.358 [2024-05-15 07:04:51.310451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.310481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.310664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.310854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.310884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.311095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.311303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.311332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.311520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.311698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.311728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.311947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.312162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.312192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.312407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.312610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.312638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.312861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.313073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.313103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 
00:27:37.359 [2024-05-15 07:04:51.313338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.313547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.313577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.313793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.313976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.314007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.314254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.314437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.314466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.314710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.314920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.314954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.315179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.315389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.315419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.315609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.315808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.315838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.316064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.316274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.316304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 
00:27:37.359 [2024-05-15 07:04:51.316522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.316731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.316760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.316976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.317158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.317188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.317391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.317574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.317604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.317847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.318053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.318083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.318275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.318486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.318515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.318736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.318968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.318999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.319217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.319407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.319437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 
00:27:37.359 [2024-05-15 07:04:51.319652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.319886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.319914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.320150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.320337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.320366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.320587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.320779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.320810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.321002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.321182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.321212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.321458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.321682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.321711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.321912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.322106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.322136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 00:27:37.359 [2024-05-15 07:04:51.322355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.322598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.359 [2024-05-15 07:04:51.322628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420 00:27:37.359 qpair failed and we were unable to recover it. 
00:27:37.359 [2024-05-15 07:04:51.322821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.359 [2024-05-15 07:04:51.323031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.359 [2024-05-15 07:04:51.323061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420
00:27:37.359 qpair failed and we were unable to recover it.
[... the four-record sequence above (two connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error, one unrecoverable-qpair message) repeats with only the timestamps advancing, from 07:04:51.323255 through 07:04:51.335506, always against tqpair=0x7fd180000b90, addr=10.0.0.2, port=4420 ...]
00:27:37.360 [2024-05-15 07:04:51.335725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.360 [2024-05-15 07:04:51.335938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.360 [2024-05-15 07:04:51.335967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.360 qpair failed and we were unable to recover it.
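For reference: errno = 111 is ECONNREFUSED on Linux, so each connect() inside posix_sock_create() reached 10.0.0.2 but found nothing accepting on TCP port 4420 (the standard NVMe-oF port) and was refused. Note also that at 07:04:51.335967 the failing qpair pointer changes from 0x7fd180000b90 to 0x24809f0, and the same refusal continues against the new qpair object. A minimal stand-alone probe (hypothetical, not part of SPDK or this test suite) reproduces the same errno against a port with no listener:

/* probe_connect.c - hypothetical stand-alone probe, not part of SPDK
 * or this test. Shows that connect() to a TCP port with no listener
 * fails with ECONNREFUSED, which is errno 111 on Linux - the value
 * posix_sock_create() reports throughout this log.
 * Build: cc -o probe_connect probe_connect.c
 * Run:   ./probe_connect 10.0.0.2 4420 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <addr> <port>\n", argv[0]);
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons((uint16_t)atoi(argv[2]));
    if (inet_pton(AF_INET, argv[1], &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", argv[1]);
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no NVMe-oF target listening, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("connected\n");
    close(fd);
    return 0;
}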
00:27:37.361 [2024-05-15 07:04:51.338119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.338284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.338308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.338532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.338724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.338748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.338961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.339135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.339160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.339363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.339564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.339589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.339778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.339962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.339988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.340184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.340380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.340404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.340593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.340794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.340819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 
00:27:37.361 [2024-05-15 07:04:51.341027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.341216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.341240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.341437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.341663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.341688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.341913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.342094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.342119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.342323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.342525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.342549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.342720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.342882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.342906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.343078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.343274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.343298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.343494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.343716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.343740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 
00:27:37.361 [2024-05-15 07:04:51.343914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.344114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.344143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.344318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.344491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.344515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.344683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.344875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.344899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.345080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.345252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.345278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.345471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.345697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.345722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.361 qpair failed and we were unable to recover it. 00:27:37.361 [2024-05-15 07:04:51.345921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.346102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.361 [2024-05-15 07:04:51.346126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.346307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.346503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.346527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 
00:27:37.362 [2024-05-15 07:04:51.346728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.346926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.346957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.347129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.347307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.347331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.347520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.347719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.347743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.347915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.348123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.348147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.348337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.348508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.348533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.348756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.348920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.348951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.349132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.349296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.349320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 
00:27:37.362 [2024-05-15 07:04:51.349519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.349711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.349736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.349927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.350127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.350152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.350350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.350542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.350566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.350757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.350946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.350971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.351143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.351307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.351331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.351559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.351760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.351784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.352001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.352177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.352202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 
00:27:37.362 [2024-05-15 07:04:51.352403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.352572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.352597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.352765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.352983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.353008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.353201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.353395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.353420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.353613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.353783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.353807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.353997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.354156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.354181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.354356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.354535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.354559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.354763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.354970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.354997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 
00:27:37.362 [2024-05-15 07:04:51.355182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.355346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.355370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.355542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.355741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.355765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.355933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.356099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.356123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.356305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.356547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.356571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.356763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.356962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.356987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.357188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.357356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.357380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 00:27:37.362 [2024-05-15 07:04:51.357559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.357722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.357748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.362 qpair failed and we were unable to recover it. 
00:27:37.362 [2024-05-15 07:04:51.357918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.362 [2024-05-15 07:04:51.358123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.358147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.358316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.358488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.358514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.358690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.358891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.358915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.359116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.359288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.359312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.359502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.359677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.359701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.359872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.360048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.360073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.360290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.360517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.360541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 
00:27:37.363 [2024-05-15 07:04:51.360708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.360879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.360904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.361106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.361298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.361322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.361492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.361663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.361688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.361919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.362097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.362121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.362290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.362487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.362512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.362709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.362877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.362902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.363083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.363260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.363284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 
00:27:37.363 [2024-05-15 07:04:51.363485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.363651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.363677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.363843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.364093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.364118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.364291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.364465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.364494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.364695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.364867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.364893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.365074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.365267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.365291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.365452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.365645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.365669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.365847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.366037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.366062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 
00:27:37.363 [2024-05-15 07:04:51.366257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.366425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.366451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.366625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.366824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.366849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.367040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.367211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.367235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.367441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.367601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.367625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.367817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.368009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.368034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.368226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.368402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.368430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.368599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.368816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.368841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 
00:27:37.363 [2024-05-15 07:04:51.369047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.369234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.369258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.369424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.369594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.363 [2024-05-15 07:04:51.369618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.363 qpair failed and we were unable to recover it. 00:27:37.363 [2024-05-15 07:04:51.369818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.369991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.370016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.370190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.370362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.370388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.370593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.370795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.370819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.370988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.371193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.371218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.371387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.371558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.371584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 
00:27:37.364 [2024-05-15 07:04:51.371769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.371966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.371992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.372187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.372382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.372406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.372609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.372812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.372836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.373060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.373228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.373252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.373424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.373618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.373642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.373807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.374032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.374057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.374255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.374425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.374449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 
00:27:37.364 [2024-05-15 07:04:51.374648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.374821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.374845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.375019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.375241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.375265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.375488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.375686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.375710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.375907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.376111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.376136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.376356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.376521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.376546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.376751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.376923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.376966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 00:27:37.364 [2024-05-15 07:04:51.377136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.377333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.364 [2024-05-15 07:04:51.377357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.364 qpair failed and we were unable to recover it. 
00:27:37.366 [2024-05-15 07:04:51.401093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248e4b0 (9): Bad file descriptor
00:27:37.366 [2024-05-15 07:04:51.401357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.366 [2024-05-15 07:04:51.401571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.366 [2024-05-15 07:04:51.401599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd180000b90 with addr=10.0.0.2, port=4420
00:27:37.366 qpair failed and we were unable to recover it.
00:27:37.366 [... the same failure cycle repeats for tqpair=0x7fd180000b90 from 07:04:51.402 through 07:04:51.421; identical repeats elided ...]
00:27:37.368 [2024-05-15 07:04:51.421324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.368 [2024-05-15 07:04:51.421517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.368 [2024-05-15 07:04:51.421547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420
00:27:37.368 qpair failed and we were unable to recover it.
00:27:37.368 [... the same failure cycle repeats for tqpair=0x7fd188000b90 from 07:04:51.422 through 07:04:51.436; identical repeats elided ...]
00:27:37.369 [2024-05-15 07:04:51.436180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.436349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.436374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.369 qpair failed and we were unable to recover it. 00:27:37.369 [2024-05-15 07:04:51.436573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.436743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.436769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.369 qpair failed and we were unable to recover it. 00:27:37.369 [2024-05-15 07:04:51.436946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.437149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.437174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.369 qpair failed and we were unable to recover it. 00:27:37.369 [2024-05-15 07:04:51.437368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.437555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.437579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.369 qpair failed and we were unable to recover it. 00:27:37.369 [2024-05-15 07:04:51.437756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.437969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.437995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.369 qpair failed and we were unable to recover it. 00:27:37.369 [2024-05-15 07:04:51.438169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.438343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.438368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.369 qpair failed and we were unable to recover it. 00:27:37.369 [2024-05-15 07:04:51.438536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.438705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.369 [2024-05-15 07:04:51.438729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.369 qpair failed and we were unable to recover it. 
00:27:37.369 [2024-05-15 07:04:51.438939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.439118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.439144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.439327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.439526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.439552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.439753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.439921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.439952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.440126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.440305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.440329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.440553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.440722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.440747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.440919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.441122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.441148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.441342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.441523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.441550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 
00:27:37.370 [2024-05-15 07:04:51.441773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.441961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.441987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.442181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.442387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.442412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.442585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.442784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.442809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.442985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.443162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.443187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.443355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.443522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.443547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.443721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.443896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.443920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.444133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.444303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.444329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 
00:27:37.370 [2024-05-15 07:04:51.444510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.444674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.444699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.444872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.445034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.445071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.445274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.445444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.445471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.445670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.445860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.445885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.446059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.446250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.446275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.446473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.446642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.446667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.446865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.447062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.447088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 
00:27:37.370 [2024-05-15 07:04:51.447256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.447453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.447477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.447649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.447819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.447844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.448043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.448215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.448241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.370 qpair failed and we were unable to recover it. 00:27:37.370 [2024-05-15 07:04:51.448445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.370 [2024-05-15 07:04:51.448711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.448736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.448939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.449142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.449172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.449368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.449567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.449592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.449769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.449979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.450004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 
00:27:37.371 [2024-05-15 07:04:51.450179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.450380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.450405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.450592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.450791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.450816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.450999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.451180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.451205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.451400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.451572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.451598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.451786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.451974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.452000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.452200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.452370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.452396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.452597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.452767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.452791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 
00:27:37.371 [2024-05-15 07:04:51.452972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.453176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.453204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.453386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.453560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.453587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.453795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.453970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.453997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.454197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.454392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.454417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.454610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.454802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.454826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.455008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.455190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.455217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.455393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.455588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.455613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 
00:27:37.371 [2024-05-15 07:04:51.455792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.455958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.455984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.456186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.456383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.456407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.456606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.456776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.456802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.456981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.457149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.457175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.457378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.457573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.457598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.457770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.457942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.457970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.458167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.458371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.458396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 
00:27:37.371 [2024-05-15 07:04:51.458565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.458759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.458784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.458951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.459150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.459176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.459373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.459569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.459594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.459792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.459985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.460011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.371 qpair failed and we were unable to recover it. 00:27:37.371 [2024-05-15 07:04:51.460187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.460382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.371 [2024-05-15 07:04:51.460406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.460594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.460784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.460809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.460986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.461180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.461205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 
00:27:37.372 [2024-05-15 07:04:51.461379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.461579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.461604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.461777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.461981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.462008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.462201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.462406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.462430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.462628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.462788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.462813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.463008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.463207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.463232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.463430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.463624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.463649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.463853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.464022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.464047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 
00:27:37.372 [2024-05-15 07:04:51.464217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.464435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.464460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.464653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.464852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.464877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.465055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.465249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.465274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.465481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.465683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.465708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.465910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.466119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.466145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.466318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.466517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.466542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.466762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.466935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.466961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 
00:27:37.372 [2024-05-15 07:04:51.467153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.467348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.467375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.467549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.467748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.467773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.467944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.468126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.468151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.468354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.468518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.468543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.468748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.468950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.468975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.469146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.469306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.469330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.469508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.469704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.469730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 
00:27:37.372 [2024-05-15 07:04:51.469963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.470156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.470181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.470380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.470540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.470565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.470764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.470938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.470964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.471162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.471330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.471355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.471547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.471711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.471736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.471938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.472112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.472137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.372 qpair failed and we were unable to recover it. 00:27:37.372 [2024-05-15 07:04:51.472305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.372 [2024-05-15 07:04:51.472502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.472527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 
00:27:37.373 [2024-05-15 07:04:51.472731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.472900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.472927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 00:27:37.373 [2024-05-15 07:04:51.473147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.473340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.473365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 00:27:37.373 [2024-05-15 07:04:51.473538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.473716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.473741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 00:27:37.373 [2024-05-15 07:04:51.473941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.474146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.474171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 00:27:37.373 [2024-05-15 07:04:51.474335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.474529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.474553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 00:27:37.373 [2024-05-15 07:04:51.474725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.474893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.474918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 00:27:37.373 [2024-05-15 07:04:51.475103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.475277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.373 [2024-05-15 07:04:51.475304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420 00:27:37.373 qpair failed and we were unable to recover it. 
00:27:37.373 [2024-05-15 07:04:51.475506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.373 [2024-05-15 07:04:51.475670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.373 [2024-05-15 07:04:51.475695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd188000b90 with addr=10.0.0.2, port=4420
00:27:37.373 qpair failed and we were unable to recover it.
[... the same connect()/qpair failure triplet repeats for tqpair=0x7fd188000b90, roughly 25 attempts in total between 07:04:51.475 and 07:04:51.485, each ending "qpair failed and we were unable to recover it." ...]
00:27:37.374 [2024-05-15 07:04:51.485700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.374 [2024-05-15 07:04:51.485914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.374 [2024-05-15 07:04:51.485950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd190000b90 with addr=10.0.0.2, port=4420
00:27:37.374 qpair failed and we were unable to recover it.
[... roughly 26 further attempts against tqpair=0x7fd190000b90 fail identically between 07:04:51.485 and 07:04:51.496 ...]
00:27:37.375 [2024-05-15 07:04:51.496157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.375 [2024-05-15 07:04:51.496393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.375 [2024-05-15 07:04:51.496420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.375 qpair failed and we were unable to recover it.
[... roughly 100 further attempts against tqpair=0x24809f0 fail identically between 07:04:51.496 and 07:04:51.537, every one ending "qpair failed and we were unable to recover it." ...]
00:27:37.378 [2024-05-15 07:04:51.537140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.537311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.537335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.537504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.537742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.537766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.537941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.538118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.538143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.538340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.538536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.538560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.538748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.538923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.538953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.539179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.539345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.539369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.539542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.539749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.539773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 
00:27:37.378 [2024-05-15 07:04:51.539974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.540176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.540201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.540376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.540543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.540568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.540750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.540917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.540946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.541142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.541323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.541347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.541548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.541719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.378 [2024-05-15 07:04:51.541743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.378 qpair failed and we were unable to recover it. 00:27:37.378 [2024-05-15 07:04:51.541915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.542090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.542115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.542316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.542516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.542545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 
00:27:37.379 [2024-05-15 07:04:51.542722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.542887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.542911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.543090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.543264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.543288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.543483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.543647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.543671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.543842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.544065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.544090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.544263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.544464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.544488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.544679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.544879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.544903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.545116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.545306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.545330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 
00:27:37.379 [2024-05-15 07:04:51.545503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.545695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.545720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.545883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.546063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.546088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.546256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.546418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.546446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.546640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.546833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.546858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.547051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.547244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.547269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.547470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.547644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.547668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.547859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.548025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.548050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 
00:27:37.379 [2024-05-15 07:04:51.548216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.548387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.548411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.548638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.548796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.548821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.549026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.549207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.549233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.549402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.549596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.549621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.549820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.549990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.550015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.550203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.550389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.550413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.550622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.550813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.550837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 
00:27:37.379 [2024-05-15 07:04:51.551041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.551237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.551262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.551464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.551641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.551665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.379 qpair failed and we were unable to recover it. 00:27:37.379 [2024-05-15 07:04:51.551864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.379 [2024-05-15 07:04:51.552050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.552076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.552242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.552433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.552458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.552617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.552779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.552803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.552995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.553199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.553223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.553402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.553591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.553615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 
00:27:37.380 [2024-05-15 07:04:51.553821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.553993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.554019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.554186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.554379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.554404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.554584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.554774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.554799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.554966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.555147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.555174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.555346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.555543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.555567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.555790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.555956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.555981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.556186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.556384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.556408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 
00:27:37.380 [2024-05-15 07:04:51.556582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.556755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.556779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.556979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.557158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.557182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.557383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.557552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.557576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.557773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.557998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.558023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.558222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.558386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.558410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.558583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.558756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.558780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.559000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.559190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.559215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 
00:27:37.380 [2024-05-15 07:04:51.559413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.559576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.559600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.559770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.559945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.559971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.560162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.560330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.560355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.560527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.560718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.560742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.560954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.561147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.561172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.561348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.561525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.561549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.561749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.561938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.561963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 
00:27:37.380 [2024-05-15 07:04:51.562140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.562361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.562386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.562553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.562780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.562804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.562981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.563177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.563202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.380 [2024-05-15 07:04:51.563371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.563570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.380 [2024-05-15 07:04:51.563595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.380 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.563817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.563995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.564020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.564194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.564360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.564384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.564582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.564760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.564785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 
00:27:37.381 [2024-05-15 07:04:51.564960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.565134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.565158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.565329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.565525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.565550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.565724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.565894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.565918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.566145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.566321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.566345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.566514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.566682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.566710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.566893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.567094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.567119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.567323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.567522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.567547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 
00:27:37.381 [2024-05-15 07:04:51.567720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.567904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.567935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.568164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.568353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.568377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.568555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.568776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.568801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.568977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.569148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.569173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.569337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.569536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.569560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.569751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.569942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.569966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.570146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.570332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.570356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 
00:27:37.381 [2024-05-15 07:04:51.570576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.570798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.570822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.571000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.571170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.571195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.571387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.571610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.571635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.571813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.572047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.572073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.572260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.572453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.572477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.572673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.572837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.572862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.573057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.573233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.573258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 
00:27:37.381 [2024-05-15 07:04:51.573453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.573621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.573645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.573811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.573980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.574005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.574200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.574390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.574416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.574580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.574779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.574804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.575018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.575209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.575234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.381 qpair failed and we were unable to recover it. 00:27:37.381 [2024-05-15 07:04:51.575434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.575595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.381 [2024-05-15 07:04:51.575620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.382 qpair failed and we were unable to recover it. 00:27:37.382 [2024-05-15 07:04:51.575788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.382 [2024-05-15 07:04:51.575967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.382 [2024-05-15 07:04:51.575992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.382 qpair failed and we were unable to recover it. 
00:27:37.382 [2024-05-15 07:04:51.576193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.382 [2024-05-15 07:04:51.576358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.382 [2024-05-15 07:04:51.576383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.382 qpair failed and we were unable to recover it.
00:27:37.382 [last four messages repeated for every retry from 2024-05-15 07:04:51.576550 through 07:04:51.637109 (elapsed 00:27:37.382 through 00:27:37.658): each pair of connect() attempts failed with errno = 111, nvme_tcp_qpair_connect_sock reported a sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420, and the qpair could not be recovered]
00:27:37.658 [2024-05-15 07:04:51.637282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.637456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.637484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.637656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.637851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.637877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.638062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.638229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.638254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.638426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.638646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.638671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.638845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.639049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.639075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.639256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.639418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.639443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.639624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.639805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.639829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 
00:27:37.658 [2024-05-15 07:04:51.640047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.640215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.640240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.640439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.640635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.640660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.640826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.641031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.641056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.641224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.641395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.641420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.641595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.641793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.641818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.642024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.642195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.642223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.642392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.642621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.642646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 
00:27:37.658 [2024-05-15 07:04:51.642838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.643022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.643056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.643240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.643437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.643462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.643660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.643859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.643884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.644060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.644228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.644252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.644433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.644599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.644626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.644830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.645028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.645053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.645230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.645430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.645454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 
00:27:37.658 [2024-05-15 07:04:51.645659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.645851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.645876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.646076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.646239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.646265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.646468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.646673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.646699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.646864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.647035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.647061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.647234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.647406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.647431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.658 qpair failed and we were unable to recover it. 00:27:37.658 [2024-05-15 07:04:51.647631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.647808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.658 [2024-05-15 07:04:51.647834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.648040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.648204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.648230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 
00:27:37.659 [2024-05-15 07:04:51.648405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.648607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.648633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.648812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.648983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.649009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.649185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.649355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.649381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.649566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.649770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.649794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.649994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.650190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.650215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.650389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.650561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.650585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.650781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.650961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.650989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 
00:27:37.659 [2024-05-15 07:04:51.651167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.651361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.651386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.651554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.651724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.651753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.651928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.652138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.652165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.652358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.652551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.652577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.652745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.652916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.652945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.653108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.653281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.653305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.653528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.653703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.653728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 
00:27:37.659 [2024-05-15 07:04:51.653935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.654106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.654130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.654327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.654524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.654548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.654747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.654916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.654946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.655125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.655297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.655321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.655493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.655663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.655688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.655872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.656037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.656063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.656240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.656437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.656462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 
00:27:37.659 [2024-05-15 07:04:51.656659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.656829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.656855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.657060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.657230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.657256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.657435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.657606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.657631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.657804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.658005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.658030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.658203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.658382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.658408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.658614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.658778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.658803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 00:27:37.659 [2024-05-15 07:04:51.659003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.659230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.659257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.659 qpair failed and we were unable to recover it. 
00:27:37.659 [2024-05-15 07:04:51.659430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.659 [2024-05-15 07:04:51.659630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.659655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.659834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.660040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.660066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.660245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.660437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.660462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.660631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.660855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.660880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.661082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.661256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.661281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.661473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.661672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.661696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.661889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.662087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.662113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 
00:27:37.660 [2024-05-15 07:04:51.662291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.662463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.662488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.662660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.662853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.662878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.663072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.663245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.663272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.663467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.663658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.663684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.663889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.664092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.664117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.664322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.664494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.664518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.664710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.664903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.664936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 
00:27:37.660 [2024-05-15 07:04:51.665114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.665292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.665317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.665510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.665678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.665702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.665876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.666070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.666096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.666261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.666426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.666451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.666612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.666779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.666804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.666975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.667146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.667171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.667349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.667513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.667539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 
00:27:37.660 [2024-05-15 07:04:51.667718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.667916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.667946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.668120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.668307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.668332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.668521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.668724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.668750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.668940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.669107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.669131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.669302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.669479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.669506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.669680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.669879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.669904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.670086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.670264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.670290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 
00:27:37.660 [2024-05-15 07:04:51.670464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.670630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.670655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.660 qpair failed and we were unable to recover it. 00:27:37.660 [2024-05-15 07:04:51.670822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.671052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.660 [2024-05-15 07:04:51.671078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.671246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.671434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.671458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.671631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.671837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.671867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.672098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.672278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.672302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.672464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.672642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.672667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.672841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.673019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.673046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 
00:27:37.661 [2024-05-15 07:04:51.673219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.673417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.673441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.673615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.673785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.673810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.673983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.674147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.674172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.674379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.674552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.674577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.674746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.674958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.674984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.675164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.675340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.675365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 00:27:37.661 [2024-05-15 07:04:51.675567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.675764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.661 [2024-05-15 07:04:51.675789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.661 qpair failed and we were unable to recover it. 
00:27:37.661 [2024-05-15 07:04:51.675974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.661 [2024-05-15 07:04:51.676146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.661 [2024-05-15 07:04:51.676170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.661 qpair failed and we were unable to recover it.
00:27:37.661-00:27:37.666 [2024-05-15 07:04:51.676375 to 07:04:51.737594] (this three-line cycle, two posix_sock_create connect() failures with errno = 111 followed by an nvme_tcp_qpair_connect_sock error for tqpair=0x24809f0 with addr=10.0.0.2, port=4420 and the line "qpair failed and we were unable to recover it.", repeats unchanged through the end of this span; only the microsecond timestamps advance)
00:27:37.666 [2024-05-15 07:04:51.737783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.666 [2024-05-15 07:04:51.737992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.666 [2024-05-15 07:04:51.738017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.666 qpair failed and we were unable to recover it. 00:27:37.666 [2024-05-15 07:04:51.738191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.666 [2024-05-15 07:04:51.738390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.666 [2024-05-15 07:04:51.738415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.666 qpair failed and we were unable to recover it. 00:27:37.666 [2024-05-15 07:04:51.738609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.666 [2024-05-15 07:04:51.738802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.666 [2024-05-15 07:04:51.738827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.666 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.738998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.739217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.739241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.739429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.739639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.739664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.739833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.740009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.740034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.740206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.740393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.740417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 
00:27:37.667 [2024-05-15 07:04:51.740582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.740804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.740829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.741032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.741196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.741221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.741415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.741577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.741601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.741792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.742005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.742031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.742230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.742417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.742441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.742605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.742775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.742799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.742977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.743151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.743177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 
00:27:37.667 [2024-05-15 07:04:51.743376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.743578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.743602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.743776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.743943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.743969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.744189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.744364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.744389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.744559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.744754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.744778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.744986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.745153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.745177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.745346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.745539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.745563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.745765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.745940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.745964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 
00:27:37.667 [2024-05-15 07:04:51.746165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.746342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.746366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.746531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.746705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.746730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.746955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.747123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.747148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.747352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.747554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.747579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.747764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.747935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.747960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.748157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.748356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.748380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.748550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.748739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.748763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 
00:27:37.667 [2024-05-15 07:04:51.748933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.749107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.749131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.749328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.749501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.749525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.749725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.749914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.749949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.750144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.750325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.750349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.667 qpair failed and we were unable to recover it. 00:27:37.667 [2024-05-15 07:04:51.750540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.750730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.667 [2024-05-15 07:04:51.750755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.750953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.751116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.751141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.751306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.751469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.751493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 
00:27:37.668 [2024-05-15 07:04:51.751663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.751860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.751887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.752070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.752239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.752263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.752460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.752653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.752677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.752852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.753043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.753068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.753262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.753488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.753512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.753679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.753872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.753896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.754111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.754279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.754304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 
00:27:37.668 [2024-05-15 07:04:51.754496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.754660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.754685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.754877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.755077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.755103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.755289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.755467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.755491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.755683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.755851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.755875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.756094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.756279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.756303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.756498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.756665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.756689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.756865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.757033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.757058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 
00:27:37.668 [2024-05-15 07:04:51.757251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.757427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.757457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.757633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.757800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.757827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.758022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.758217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.758242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.758406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.758566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.758591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.758779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.758972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.758997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.759172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.759363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.759388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.759581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.759784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.759808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 
00:27:37.668 [2024-05-15 07:04:51.759982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.760181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.760205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.760380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.760572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.760596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.760790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.760957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.760982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.761159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.761382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.761410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.761646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.761867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.761891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.762098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.762275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.668 [2024-05-15 07:04:51.762299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.668 qpair failed and we were unable to recover it. 00:27:37.668 [2024-05-15 07:04:51.762489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.762681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.762705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 
00:27:37.669 [2024-05-15 07:04:51.762897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.763126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.763151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.763343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.763569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.763594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.763756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.763974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.764000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.764185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.764351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.764376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.764596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.764790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.764815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.764986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.765171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.765198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.765394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.765590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.765614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 
00:27:37.669 [2024-05-15 07:04:51.765787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.765992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.766018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.766220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.766445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.766469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.766665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.766857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.766882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.767055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.767224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.767249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.767413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.767612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.767637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.767803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.767999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.768024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.768191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.768351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.768375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 
00:27:37.669 [2024-05-15 07:04:51.768574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.768772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.768797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.768967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.769140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.769165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.769331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.769497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.769522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.769691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.769890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.769915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.770123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.770295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.770320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.770514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.770681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.770705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.770871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.771045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.771070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 
00:27:37.669 [2024-05-15 07:04:51.771294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.771490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.771517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.771686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.771881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.771906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.772126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.772322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.772346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.772536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.772734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.772758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.772938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.773103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.773127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.773317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.773484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.773510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 00:27:37.669 [2024-05-15 07:04:51.773714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.773883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.773908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.669 qpair failed and we were unable to recover it. 
00:27:37.669 [2024-05-15 07:04:51.774086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.774275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.669 [2024-05-15 07:04:51.774299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it. 00:27:37.670 [2024-05-15 07:04:51.774496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.774688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.774713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it. 00:27:37.670 [2024-05-15 07:04:51.774881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.775056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.775082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it. 00:27:37.670 [2024-05-15 07:04:51.775253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.775438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.775464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it. 00:27:37.670 [2024-05-15 07:04:51.775687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.775880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.775905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it. 00:27:37.670 [2024-05-15 07:04:51.776108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.776277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.776302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it. 00:27:37.670 [2024-05-15 07:04:51.776504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.776670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.776695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it. 
00:27:37.670 [2024-05-15 07:04:51.776893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.777101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.670 [2024-05-15 07:04:51.777126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.670 qpair failed and we were unable to recover it.
00:27:37.670-00:27:37.675 [2024-05-15 07:04:51.777327 - 07:04:51.838336] the same four-line sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt throughout this interval.
00:27:37.675 [2024-05-15 07:04:51.838508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.838708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.838733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 00:27:37.675 [2024-05-15 07:04:51.838905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.839118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.839152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 00:27:37.675 [2024-05-15 07:04:51.839353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.839546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.839571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 00:27:37.675 [2024-05-15 07:04:51.839734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.839938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.839963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 00:27:37.675 [2024-05-15 07:04:51.840160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.840325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.840349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 00:27:37.675 [2024-05-15 07:04:51.840583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.840748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.840773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 00:27:37.675 [2024-05-15 07:04:51.840974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.841152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.841177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 
00:27:37.675 [2024-05-15 07:04:51.841366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.841562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.675 [2024-05-15 07:04:51.841587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.675 qpair failed and we were unable to recover it. 00:27:37.675 [2024-05-15 07:04:51.841765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.841933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.841959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.842139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.842331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.842356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.842554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.842750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.842774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.843000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.843200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.843229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.843392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.843591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.843615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.843837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.844035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.844061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 
00:27:37.676 [2024-05-15 07:04:51.844235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.844405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.844429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.844608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.844809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.844834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.845038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.845226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.845250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.845444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.845614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.845638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.845844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.846029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.846054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.846251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.846415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.846440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.846636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.846811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.846835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 
00:27:37.676 [2024-05-15 07:04:51.847008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.847202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.847231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.847425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.847583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.847608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.847779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.847975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.848000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.848196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.848399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.848426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.848624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.848820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.848845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.849012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.849182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.849207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.849396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.849592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.849616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 
00:27:37.676 [2024-05-15 07:04:51.849837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.850056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.850082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.850275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.850464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.850489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.850679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.850875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.850899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.851111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.851277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.851301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.851470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.851659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.851684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.851861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.852035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.852061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.852231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.852426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.852450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 
00:27:37.676 [2024-05-15 07:04:51.852646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.852835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.852859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.853037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.853209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.853234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.676 [2024-05-15 07:04:51.853432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.853626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.676 [2024-05-15 07:04:51.853650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.676 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.853847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.854051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.854075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.854281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.854471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.854495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.854658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.854850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.854874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.855055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.855251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.855277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 
00:27:37.677 [2024-05-15 07:04:51.855468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.855658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.855682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.855852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.856054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.856079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.856277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.856445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.856470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.856671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.856833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.856857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.857028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.857223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.857248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.857417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.857586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.857610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.857777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.857956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.857983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 
00:27:37.677 [2024-05-15 07:04:51.858156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.858356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.858383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.858577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.858775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.858799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.858970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.859151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.859177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.859375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.859573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.859597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.859819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.860014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.860040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.860216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.860414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.860438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.860627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.860792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.860818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 
00:27:37.677 [2024-05-15 07:04:51.860997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.861187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.861211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.861380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.861571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.861595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.861793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.861975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.862000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.862220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.862388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.862414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.862603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.862768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.862792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.862958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.863129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.863154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.863346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.863526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.863551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 
00:27:37.677 [2024-05-15 07:04:51.863711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.863874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.863899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.864091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.864312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.864336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.864509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.864702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.864727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.864927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.865109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.865133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.677 qpair failed and we were unable to recover it. 00:27:37.677 [2024-05-15 07:04:51.865304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.865497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.677 [2024-05-15 07:04:51.865521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.865729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.865901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.865927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.866135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.866333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.866357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 
00:27:37.678 [2024-05-15 07:04:51.866537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.866722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.866746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.866918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.867127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.867151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.867321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.867523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.867552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.867781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.868002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.868027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.868218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.868383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.868408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.868614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.868805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.868830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.868999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.869198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.869222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 
00:27:37.678 [2024-05-15 07:04:51.869393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.869558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.869582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.869746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.869906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.869935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.870132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.870350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.870375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.870570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.870789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.870814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.870990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.871166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.871191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.871392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.871549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.871573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 00:27:37.678 [2024-05-15 07:04:51.871766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.871957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.678 [2024-05-15 07:04:51.871983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.678 qpair failed and we were unable to recover it. 
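On Linux, errno 111 is ECONNREFUSED: every connect() to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused, which normally means nothing is bound to that port on the target. A minimal triage sketch, assuming shell access to the target host; these commands are illustrative and were not part of this run:

# Hypothetical triage on the target host -- not executed by this job.
ss -ltn 'sport = :4420'      # only the header line => no listener on 4420
nc -zv -w 2 10.0.0.2 4420    # exits non-zero while the connection is refused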
00:27:37.678 [2024-05-15 07:04:51.872187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.678 [2024-05-15 07:04:51.872362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.678 [2024-05-15 07:04:51.872387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.678 qpair failed and we were unable to recover it.
00:27:37.678 [... same error sequence repeats for timestamps 07:04:51.872592 through 07:04:51.872906 ...]
00:27:37.678 [2024-05-15 07:04:51.873092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.678 [2024-05-15 07:04:51.873297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.678 [2024-05-15 07:04:51.873324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.678 07:04:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:37.678 qpair failed and we were unable to recover it.
00:27:37.678 07:04:51 -- common/autotest_common.sh@852 -- # return 0
00:27:37.678 [2024-05-15 07:04:51.873535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.678 07:04:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:27:37.940 [2024-05-15 07:04:51.873706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.940 [2024-05-15 07:04:51.873731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.940 qpair failed and we were unable to recover it.
00:27:37.940 07:04:51 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:37.940 [2024-05-15 07:04:51.873970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.940 07:04:51 -- common/autotest_common.sh@10 -- # set +x
00:27:37.940 [... connect() failed / qpair failed sequence continues, timestamps 07:04:51.874138 through 07:04:51.874959 ...]
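The interleaved "-- # ..." lines above are the harness's bash xtrace, printed concurrently with the application's stderr: a retry-counter check (( i == 0 )) is evaluated, the helper returns 0, and the start_nvmf_tgt timing section is closed while the initiator keeps redialing. A sketch of the kind of counted wait loop that trace suggests; the function name, probe command, and 30-attempt cap are assumptions for illustration, not SPDK's actual helper:

# Illustrative counted wait loop; names and limits are assumed, not SPDK's code.
wait_for_listener() {
    local i=0
    while ! nc -z 10.0.0.2 4420; do   # probe the NVMe/TCP listener port
        i=$((i + 1))
        if (( i >= 30 )); then
            return 1                  # give up after 30 one-second probes
        fi
        sleep 1
    done
    return 0                          # (( i == 0 )) would mean it was up immediately
}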
00:27:37.940 [2024-05-15 07:04:51.875157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.875325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.875350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.940 qpair failed and we were unable to recover it. 00:27:37.940 [2024-05-15 07:04:51.875556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.875760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.875786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.940 qpair failed and we were unable to recover it. 00:27:37.940 [2024-05-15 07:04:51.875965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.876163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.876190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.940 qpair failed and we were unable to recover it. 00:27:37.940 [2024-05-15 07:04:51.876389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.876585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.876610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.940 qpair failed and we were unable to recover it. 00:27:37.940 [2024-05-15 07:04:51.876832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.877003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.877028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.940 qpair failed and we were unable to recover it. 00:27:37.940 [2024-05-15 07:04:51.877194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.877391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.877416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.940 qpair failed and we were unable to recover it. 00:27:37.940 [2024-05-15 07:04:51.877619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.877823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.940 [2024-05-15 07:04:51.877850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.940 qpair failed and we were unable to recover it. 
00:27:37.940 [2024-05-15 07:04:51.878050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.940 [2024-05-15 07:04:51.878242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.940 [2024-05-15 07:04:51.878267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420
00:27:37.940 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two posix_sock_create connect() errors, one nvme_tcp_qpair_connect_sock error, "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 07:04:51.878457 through 07:04:51.889066 ...]
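errno = 111 on Linux is ECONNREFUSED: each connect() to 10.0.0.2 port 4420 is being actively refused, consistent with the target side not yet listening on that address (the transport, subsystem, and namespace are only created further down in this log). A quick sketch for decoding a raw errno value, assuming only that python3 is on the PATH:

    # Map errno 111 to its symbolic name and message (Linux)
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused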
[... connect() failed, errno = 111 retries continue from 07:04:51.889241 through 07:04:51.891847, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:37.942 07:04:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:37.942 07:04:51 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:37.942 07:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:37.942 07:04:51 -- common/autotest_common.sh@10 -- # set +x
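While the host-side reconnect loop keeps failing, target_disconnect.sh starts provisioning the target: the traced rpc_cmd call creates a RAM-backed malloc bdev named Malloc0 (size 64, block size 512 bytes; rpc.py takes the size in MiB) for the subsystem to export. A minimal standalone sketch of the same step, assuming an SPDK target is already running and scripts/rpc.py talks to the default RPC socket /var/tmp/spdk.sock:

    # Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0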
[... identical connect() failed, errno = 111 / "qpair failed and we were unable to recover it." pattern repeats from 07:04:51.892043 through 07:04:51.911961 ...]
[... connect() failed, errno = 111 retries continue from 07:04:51.912156 through 07:04:51.914942 ...]
00:27:37.943 Malloc0
00:27:37.943 07:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:37.943 07:04:51 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:37.943 07:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:37.944 07:04:51 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed, errno = 111 retries continue from 07:04:51.915145 through 07:04:51.917730 ...]
00:27:37.944 [2024-05-15 07:04:51.917254] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
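The *** TCP Transport Init *** notice from tcp.c confirms that the nvmf_create_transport RPC took effect: the target now has a TCP transport layer, although connections to 10.0.0.2:4420 will keep being refused until a listener is added for a subsystem. A sketch of the step outside the test harness; the extra -o flag seen in the trace is a test-script option, so only the transport type is assumed here:

    # Initialize the NVMe-oF TCP transport inside the running target
    ./scripts/rpc.py nvmf_create_transport -t tcp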
[... identical connect() failed, errno = 111 / "qpair failed and we were unable to recover it." pattern repeats from 07:04:51.917934 through 07:04:51.923528 ...]
[... connect() failed, errno = 111 retries continue from 07:04:51.923726 through 07:04:51.926360 ...]
00:27:37.945 07:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:37.945 07:04:51 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:37.945 07:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:37.945 07:04:51 -- common/autotest_common.sh@10 -- # set +x
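This step creates the NVMe-oF subsystem the host is trying to reach; in the traced call, -s sets the subsystem's serial number and -a allows any host NQN to connect. Equivalent standalone sketch, under the same running-target assumption as above:

    # Create subsystem cnode1 with serial SPDK00000000000001, open to any host
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001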
[... identical connect() failed, errno = 111 / "qpair failed and we were unable to recover it." pattern repeats from 07:04:51.926541 through 07:04:51.931832 ...]
[... connect() failed, errno = 111 retries continue from 07:04:51.932000 through 07:04:51.934566 ...]
00:27:37.945 07:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:27:37.945 07:04:51 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:37.945 07:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:27:37.945 07:04:51 -- common/autotest_common.sh@10 -- # set +x
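Here the Malloc0 bdev is attached to the subsystem as a namespace, which is what the host will eventually see as an NVMe namespace. A sketch of the step, plus the listener that the still-failing connects are waiting for; no nvmf_subsystem_add_listener call has appeared in the log at this point, so the last command below is only the typical follow-up, not something this run has executed yet:

    # Expose Malloc0 as a namespace of cnode1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Typical follow-up: listen on the address/port the host is dialing
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420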
00:27:37.945 [2024-05-15 07:04:51.934768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.945 [2024-05-15 07:04:51.934940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.945 [2024-05-15 07:04:51.934965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.945 qpair failed and we were unable to recover it. 00:27:37.945 [2024-05-15 07:04:51.935138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.935307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.935332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.935537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.935706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.935730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.935936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.936133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.936162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.936337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.936509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.936534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.936710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.936879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.936903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.937158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.937351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.937376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 
00:27:37.946 [2024-05-15 07:04:51.937555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.937724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.937749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.937924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.938117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.938142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.938304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.938474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.938499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.938670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.938847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.938873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.939076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.939240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.939265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.939465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.939638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.939663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.939837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.940035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.940062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 
00:27:37.946 [2024-05-15 07:04:51.940240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.940403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.940427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.940600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.940789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.940814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.941006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.941205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.941232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.941410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 07:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.946 [2024-05-15 07:04:51.941616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.941640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 07:04:51 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.946 [2024-05-15 07:04:51.941807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 07:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.946 07:04:51 -- common/autotest_common.sh@10 -- # set +x 00:27:37.946 [2024-05-15 07:04:51.941972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.941997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.942167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.942375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.942400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.942628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.942802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.942827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 
00:27:37.946 [2024-05-15 07:04:51.943041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.943227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.943252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.943446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.943644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.943669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.943866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.944066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.944093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.944307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.944564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.944590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.944770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.944938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.944964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 [2024-05-15 07:04:51.945162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.945374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.946 [2024-05-15 07:04:51.945399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24809f0 with addr=10.0.0.2, port=4420 00:27:37.946 qpair failed and we were unable to recover it. 
00:27:37.946 [2024-05-15 07:04:51.945502] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.946 [2024-05-15 07:04:51.948030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.946 [2024-05-15 07:04:51.948239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.946 [2024-05-15 07:04:51.948267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.946 [2024-05-15 07:04:51.948282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.946 [2024-05-15 07:04:51.948294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.946 [2024-05-15 07:04:51.948326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.946 qpair failed and we were unable to recover it. 00:27:37.946 07:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.946 07:04:51 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:37.947 07:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.947 07:04:51 -- common/autotest_common.sh@10 -- # set +x 00:27:37.947 07:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.947 07:04:51 -- host/target_disconnect.sh@58 -- # wait 628914 00:27:37.947 [2024-05-15 07:04:51.957923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:51.958117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:51.958144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:51.958159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:51.958171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:51.958199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 
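[editor's aside, not part of the captured log] The xtrace lines interleaved above are host/target_disconnect.sh bringing the subsystem back up: rpc_cmd nvmf_subsystem_add_ns attaches Malloc0 to nqn.2016-06.io.spdk:cnode1, rpc_cmd nvmf_subsystem_add_listener re-opens 10.0.0.2:4420 (the target then logs "NVMe/TCP Target Listening"), and a discovery listener is added on the same port. Outside the harness, the equivalent sequence against a running nvmf_tgt would look roughly like this (a hedged sketch assuming a default SPDK checkout; the transport and subsystem creation steps are inferred from earlier in the run, not shown verbatim here):
  # one-time setup, inferred: TCP transport plus an open-access subsystem
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # -a: allow any host
  # the steps visible in the xtrace above
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420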
00:27:37.947 [2024-05-15 07:04:51.967943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:51.968124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:51.968156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:51.968171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:51.968183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:51.968211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:51.977900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:51.978131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:51.978157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:51.978172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:51.978184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:51.978211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:51.987898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:51.988084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:51.988109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:51.988124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:51.988135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:51.988162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 
00:27:37.947 [2024-05-15 07:04:51.997940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:51.998112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:51.998138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:51.998152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:51.998164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:51.998192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:52.007927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.008117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.008144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.008158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.008171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.008203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:52.018023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.018224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.018250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.018264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.018277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.018304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 
00:27:37.947 [2024-05-15 07:04:52.028010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.028182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.028208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.028230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.028242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.028270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:52.038040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.038213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.038237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.038252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.038263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.038290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:52.048067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.048251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.048277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.048292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.048303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.048330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 
00:27:37.947 [2024-05-15 07:04:52.058078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.058258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.058288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.058303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.058315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.058343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:52.068143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.068358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.068384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.068399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.068411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.068438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 00:27:37.947 [2024-05-15 07:04:52.078141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.078317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.078342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.078357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.947 [2024-05-15 07:04:52.078369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.947 [2024-05-15 07:04:52.078396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.947 qpair failed and we were unable to recover it. 
00:27:37.947 [2024-05-15 07:04:52.088172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.947 [2024-05-15 07:04:52.088366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.947 [2024-05-15 07:04:52.088393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.947 [2024-05-15 07:04:52.088407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.088419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.088446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 00:27:37.948 [2024-05-15 07:04:52.098191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.098402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.098427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.098441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.098454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.098487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 00:27:37.948 [2024-05-15 07:04:52.108279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.108459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.108485] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.108500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.108512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.108539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 
00:27:37.948 [2024-05-15 07:04:52.118263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.118458] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.118486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.118505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.118516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.118545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 00:27:37.948 [2024-05-15 07:04:52.128311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.128482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.128508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.128522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.128534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.128561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 00:27:37.948 [2024-05-15 07:04:52.138312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.138491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.138516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.138531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.138543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.138570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 
00:27:37.948 [2024-05-15 07:04:52.148373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.148554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.148586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.148601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.148613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.148640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 00:27:37.948 [2024-05-15 07:04:52.158557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.158786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.158811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.158826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.158838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.158865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 00:27:37.948 [2024-05-15 07:04:52.168477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.948 [2024-05-15 07:04:52.168669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.948 [2024-05-15 07:04:52.168708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.948 [2024-05-15 07:04:52.168724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.948 [2024-05-15 07:04:52.168736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:37.948 [2024-05-15 07:04:52.168763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:37.948 qpair failed and we were unable to recover it. 
00:27:38.207 [2024-05-15 07:04:52.178484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.178672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.178698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.178713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.178725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.178753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.207 qpair failed and we were unable to recover it. 00:27:38.207 [2024-05-15 07:04:52.188544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.188751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.188777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.188791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.188803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.188838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.207 qpair failed and we were unable to recover it. 00:27:38.207 [2024-05-15 07:04:52.198547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.198726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.198752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.198766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.198781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.198808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.207 qpair failed and we were unable to recover it. 
00:27:38.207 [2024-05-15 07:04:52.208550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.208725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.208752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.208767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.208780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.208807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.207 qpair failed and we were unable to recover it. 00:27:38.207 [2024-05-15 07:04:52.218536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.218714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.218740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.218755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.218767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.218794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.207 qpair failed and we were unable to recover it. 00:27:38.207 [2024-05-15 07:04:52.228633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.228850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.228875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.228890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.228901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.228938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.207 qpair failed and we were unable to recover it. 
00:27:38.207 [2024-05-15 07:04:52.238603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.238777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.238807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.238823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.238835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.238862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.207 qpair failed and we were unable to recover it. 00:27:38.207 [2024-05-15 07:04:52.248643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.207 [2024-05-15 07:04:52.248818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.207 [2024-05-15 07:04:52.248844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.207 [2024-05-15 07:04:52.248858] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.207 [2024-05-15 07:04:52.248870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.207 [2024-05-15 07:04:52.248897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.258659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.258836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.258862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.258876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.258887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.258915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 
00:27:38.208 [2024-05-15 07:04:52.268736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.268964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.268992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.269007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.269022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.269052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.278713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.278889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.278915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.278936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.278956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.278985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.288743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.288925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.288957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.288972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.288984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.289012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 
00:27:38.208 [2024-05-15 07:04:52.298738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.298927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.298960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.298975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.298987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.299015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.308765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.308942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.308968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.308983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.308994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.309021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.318797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.318969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.318994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.319009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.319021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.319048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 
00:27:38.208 [2024-05-15 07:04:52.328841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.329029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.329054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.329069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.329081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.329108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.338864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.339053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.339078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.339092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.339104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.339131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.348885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.349063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.349089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.349103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.349115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.349142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 
00:27:38.208 [2024-05-15 07:04:52.358898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.359076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.359102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.359117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.359128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.359156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.369038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.369262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.369288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.369302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.369319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.369347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 00:27:38.208 [2024-05-15 07:04:52.379061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.208 [2024-05-15 07:04:52.379257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.208 [2024-05-15 07:04:52.379283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.208 [2024-05-15 07:04:52.379297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.208 [2024-05-15 07:04:52.379309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:38.208 [2024-05-15 07:04:52.379336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.208 qpair failed and we were unable to recover it. 
00:27:38.208 [2024-05-15 07:04:52.389034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.208 [2024-05-15 07:04:52.389218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.208 [2024-05-15 07:04:52.389242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.208 [2024-05-15 07:04:52.389256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.208 [2024-05-15 07:04:52.389268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.209 [2024-05-15 07:04:52.389295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.209 qpair failed and we were unable to recover it.
00:27:38.209 [2024-05-15 07:04:52.399093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.209 [2024-05-15 07:04:52.399298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.209 [2024-05-15 07:04:52.399324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.209 [2024-05-15 07:04:52.399338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.209 [2024-05-15 07:04:52.399350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.209 [2024-05-15 07:04:52.399377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.209 qpair failed and we were unable to recover it.
00:27:38.209 [2024-05-15 07:04:52.409066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.209 [2024-05-15 07:04:52.409315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.209 [2024-05-15 07:04:52.409341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.209 [2024-05-15 07:04:52.409356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.209 [2024-05-15 07:04:52.409367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.209 [2024-05-15 07:04:52.409394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.209 qpair failed and we were unable to recover it.
00:27:38.209 [2024-05-15 07:04:52.419112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.209 [2024-05-15 07:04:52.419320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.209 [2024-05-15 07:04:52.419345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.209 [2024-05-15 07:04:52.419359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.209 [2024-05-15 07:04:52.419370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.209 [2024-05-15 07:04:52.419397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.209 qpair failed and we were unable to recover it.
00:27:38.209 [2024-05-15 07:04:52.429154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.209 [2024-05-15 07:04:52.429334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.209 [2024-05-15 07:04:52.429359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.209 [2024-05-15 07:04:52.429373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.209 [2024-05-15 07:04:52.429385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.209 [2024-05-15 07:04:52.429411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.209 qpair failed and we were unable to recover it.
00:27:38.209 [2024-05-15 07:04:52.439139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.209 [2024-05-15 07:04:52.439316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.209 [2024-05-15 07:04:52.439342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.209 [2024-05-15 07:04:52.439356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.209 [2024-05-15 07:04:52.439368] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.209 [2024-05-15 07:04:52.439396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.209 qpair failed and we were unable to recover it.
00:27:38.468 [2024-05-15 07:04:52.449247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.468 [2024-05-15 07:04:52.449419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.468 [2024-05-15 07:04:52.449445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.449459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.449471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.449498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.459215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.459399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.459424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.459438] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.459456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.459483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.469296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.469476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.469502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.469516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.469528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.469554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.479294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.479471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.479496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.479510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.479522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.479549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.489301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.489482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.489507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.489522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.489533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.489560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.499356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.499533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.499559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.499573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.499585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.499612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.509364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.509547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.509574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.509588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.509600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.509627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.519407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.519581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.519606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.519620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.519632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.519659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.529443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.529639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.529665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.529680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.529691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.529718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.539462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.539633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.539658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.539672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.539684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.539711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.549462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.549633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.549659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.549673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.549690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.549718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.559494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.559664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.559689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.559703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.559715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.559743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.569506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.569692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.569718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.569732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.569744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.569771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.579582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.579762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.579787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.579801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.469 [2024-05-15 07:04:52.579813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.469 [2024-05-15 07:04:52.579840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.469 qpair failed and we were unable to recover it.
00:27:38.469 [2024-05-15 07:04:52.589626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.469 [2024-05-15 07:04:52.589826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.469 [2024-05-15 07:04:52.589851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.469 [2024-05-15 07:04:52.589866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.589877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.589904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.599611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.599788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.599813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.599827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.599839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.599865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.609665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.609845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.609871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.609886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.609898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.609925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.619666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.619848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.619873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.619887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.619899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.619926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.629714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.629913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.629946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.629961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.629974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.630001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.639758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.639983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.640009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.640029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.640041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.640069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.649779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.649957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.649984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.649998] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.650010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.650037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.659781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.660015] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.660040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.660054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.660066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.660093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.669849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.670033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.670060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.670079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.670092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.670120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.679844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.680032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.680058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.680073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.680084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.680112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.689875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.690059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.690085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.690100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.690115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.690142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.470 [2024-05-15 07:04:52.699922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.470 [2024-05-15 07:04:52.700113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.470 [2024-05-15 07:04:52.700139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.470 [2024-05-15 07:04:52.700157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.470 [2024-05-15 07:04:52.700171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.470 [2024-05-15 07:04:52.700200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.470 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.709940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.710124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.710150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.710164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.710176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.710204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.719971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.720173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.720199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.720213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.720225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.720252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.730032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.730255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.730280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.730300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.730313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.730340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.740081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.740271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.740296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.740310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.740322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.740349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.750104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.750320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.750345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.750360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.750372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.750399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.760096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.760270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.760295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.760309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.760322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.760349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.770100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.770271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.770296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.770310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.770322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.770350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.780148] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.780325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.780350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.780365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.780377] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.780404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.790182] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.790360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.790385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.790399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.790411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.790438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.800192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.800362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.800387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.730 [2024-05-15 07:04:52.800401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.730 [2024-05-15 07:04:52.800414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.730 [2024-05-15 07:04:52.800440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.730 qpair failed and we were unable to recover it.
00:27:38.730 [2024-05-15 07:04:52.810242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.730 [2024-05-15 07:04:52.810415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.730 [2024-05-15 07:04:52.810440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.810454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.810466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.810492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.820298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.820510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.820535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.820556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.820568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.820595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.830281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.830448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.830473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.830487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.830498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.830525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.840384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.840589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.840615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.840629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.840643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.840671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.850368] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.850543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.850569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.850584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.850597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.850624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.860395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.860607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.860632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.860647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.860659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.860686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.870407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.870592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.870618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.870632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.870644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.870670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.880431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.880607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.880632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.880647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.880659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.880686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.890439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.890616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.890640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.890654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.890666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.890693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.900511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.900689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.900714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.900729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.900740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.900768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.910572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.910746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.910772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.910792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.910804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.910832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.920561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.920733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.920758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.920773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.920785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.920812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.930588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.930767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.930793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.930807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.930819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.930846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.940656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.940834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.731 [2024-05-15 07:04:52.940860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.731 [2024-05-15 07:04:52.940875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.731 [2024-05-15 07:04:52.940887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.731 [2024-05-15 07:04:52.940914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.731 qpair failed and we were unable to recover it.
00:27:38.731 [2024-05-15 07:04:52.950634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.731 [2024-05-15 07:04:52.950800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.732 [2024-05-15 07:04:52.950824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.732 [2024-05-15 07:04:52.950838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.732 [2024-05-15 07:04:52.950850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.732 [2024-05-15 07:04:52.950876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.732 qpair failed and we were unable to recover it.
00:27:38.732 [2024-05-15 07:04:52.960671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.732 [2024-05-15 07:04:52.960881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.732 [2024-05-15 07:04:52.960907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.732 [2024-05-15 07:04:52.960921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.732 [2024-05-15 07:04:52.960941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.732 [2024-05-15 07:04:52.960970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.732 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:52.970669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:52.970838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:52.970864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.991 [2024-05-15 07:04:52.970879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.991 [2024-05-15 07:04:52.970890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.991 [2024-05-15 07:04:52.970917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.991 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:52.980721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:52.980898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:52.980923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.991 [2024-05-15 07:04:52.980948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.991 [2024-05-15 07:04:52.980961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.991 [2024-05-15 07:04:52.980988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.991 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:52.990750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:52.990953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:52.990979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.991 [2024-05-15 07:04:52.990993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.991 [2024-05-15 07:04:52.991005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.991 [2024-05-15 07:04:52.991032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.991 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:53.000791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:53.000963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:53.000996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.991 [2024-05-15 07:04:53.001012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.991 [2024-05-15 07:04:53.001024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.991 [2024-05-15 07:04:53.001051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.991 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:53.010798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:53.010980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:53.011005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.991 [2024-05-15 07:04:53.011020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.991 [2024-05-15 07:04:53.011032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.991 [2024-05-15 07:04:53.011059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.991 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:53.020866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:53.021050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:53.021076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.991 [2024-05-15 07:04:53.021090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.991 [2024-05-15 07:04:53.021102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.991 [2024-05-15 07:04:53.021130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.991 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:53.030916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:53.031125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:53.031152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.991 [2024-05-15 07:04:53.031169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.991 [2024-05-15 07:04:53.031181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.991 [2024-05-15 07:04:53.031208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.991 qpair failed and we were unable to recover it.
00:27:38.991 [2024-05-15 07:04:53.040892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.991 [2024-05-15 07:04:53.041122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.991 [2024-05-15 07:04:53.041147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.041162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.041174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.041201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.050925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.051108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.051133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.051148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.051159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.051186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.060965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.061175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.061200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.061215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.061226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.061253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.070968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.071140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.071165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.071180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.071192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.071218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.080986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.081164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.081189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.081204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.081216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.081242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.091017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.091210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.091241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.091256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.091268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.091295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.101093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.101278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.101302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.101316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.101328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.101356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.111091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.111264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.111290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.111304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.111316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.111343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.121127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.121312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.121338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.121352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.121364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.121391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.131157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.131333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.131358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.131373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.131384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.131417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.141210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.141390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.141415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.141430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.141441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.141469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.151271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.151451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.151477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.151491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.151503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.151531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.161245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.161420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.161446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.161461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.161473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.161500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.171263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.171495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.992 [2024-05-15 07:04:53.171522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.992 [2024-05-15 07:04:53.171541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.992 [2024-05-15 07:04:53.171553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.992 [2024-05-15 07:04:53.171581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.992 qpair failed and we were unable to recover it.
00:27:38.992 [2024-05-15 07:04:53.181340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.992 [2024-05-15 07:04:53.181538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.993 [2024-05-15 07:04:53.181569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.993 [2024-05-15 07:04:53.181584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.993 [2024-05-15 07:04:53.181596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.993 [2024-05-15 07:04:53.181623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.993 qpair failed and we were unable to recover it.
00:27:38.993 [2024-05-15 07:04:53.191305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.993 [2024-05-15 07:04:53.191531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.993 [2024-05-15 07:04:53.191557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.993 [2024-05-15 07:04:53.191572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.993 [2024-05-15 07:04:53.191584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.993 [2024-05-15 07:04:53.191612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.993 qpair failed and we were unable to recover it.
00:27:38.993 [2024-05-15 07:04:53.201375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.993 [2024-05-15 07:04:53.201553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.993 [2024-05-15 07:04:53.201578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.993 [2024-05-15 07:04:53.201593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.993 [2024-05-15 07:04:53.201605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.993 [2024-05-15 07:04:53.201632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.993 qpair failed and we were unable to recover it.
00:27:38.993 [2024-05-15 07:04:53.211379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.993 [2024-05-15 07:04:53.211585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.993 [2024-05-15 07:04:53.211610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.993 [2024-05-15 07:04:53.211625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.993 [2024-05-15 07:04:53.211637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.993 [2024-05-15 07:04:53.211663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.993 qpair failed and we were unable to recover it.
00:27:38.993 [2024-05-15 07:04:53.221438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:38.993 [2024-05-15 07:04:53.221659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:38.993 [2024-05-15 07:04:53.221684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:38.993 [2024-05-15 07:04:53.221700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:38.993 [2024-05-15 07:04:53.221712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:38.993 [2024-05-15 07:04:53.221744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:38.993 qpair failed and we were unable to recover it.
00:27:39.251 [2024-05-15 07:04:53.231433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.231653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.231679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.231694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.231706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.231734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.241475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.241648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.241674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.241688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.241700] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.241727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.251550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.251731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.251757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.251775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.251787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.251814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.261662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.261838] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.261865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.261879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.261891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.261918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.271550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.271723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.271755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.271770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.271781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.271808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.281563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.281747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.281773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.281788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.281800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.281827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.291599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.291771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.291796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.291810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.291822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.291849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.301666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.301849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.301875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.301889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.301901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.301927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.311653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.311822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.311848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.311862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.311874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.311907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.321707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.321911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.321943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.321959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.321971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.321998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.331722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.331919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.331955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.331971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.331982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.332010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.341810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.342018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.342043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.342057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.342069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.342096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.351770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.351961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.351990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.352005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.352017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.352044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.361807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.362023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.362053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.362069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.362081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.362108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.371852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.372034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.372060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.372074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.372086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.372113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.381901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.382088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.382114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.382129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.382140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.382167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.391910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.392133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.392159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.392174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.392185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.392213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.401943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.402119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.402145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.402160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.402172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.402205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.411950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.412153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.412178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.412193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.412205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.412232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.422005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.422249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.422276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.422291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.422307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.422336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.432033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.432216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.432242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.432257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.432269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.432296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.442044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.442215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.442241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.442255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.442267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.442294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.452082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.452277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.452307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.452322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.452334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.452361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.462099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.462282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.462307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.462321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.462332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.252 [2024-05-15 07:04:53.462359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.252 qpair failed and we were unable to recover it.
00:27:39.252 [2024-05-15 07:04:53.472133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.252 [2024-05-15 07:04:53.472353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.252 [2024-05-15 07:04:53.472378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.252 [2024-05-15 07:04:53.472392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.252 [2024-05-15 07:04:53.472404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.253 [2024-05-15 07:04:53.472431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.253 qpair failed and we were unable to recover it.
00:27:39.253 [2024-05-15 07:04:53.482189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.253 [2024-05-15 07:04:53.482405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.253 [2024-05-15 07:04:53.482431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.253 [2024-05-15 07:04:53.482446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.253 [2024-05-15 07:04:53.482459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.253 [2024-05-15 07:04:53.482487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.253 qpair failed and we were unable to recover it.
00:27:39.511 [2024-05-15 07:04:53.492204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.511 [2024-05-15 07:04:53.492433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.511 [2024-05-15 07:04:53.492461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.511 [2024-05-15 07:04:53.492478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.511 [2024-05-15 07:04:53.492496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.511 [2024-05-15 07:04:53.492526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.511 qpair failed and we were unable to recover it.
00:27:39.511 [2024-05-15 07:04:53.502253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.511 [2024-05-15 07:04:53.502438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.511 [2024-05-15 07:04:53.502464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.511 [2024-05-15 07:04:53.502479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.511 [2024-05-15 07:04:53.502491] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.511 [2024-05-15 07:04:53.502518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.511 qpair failed and we were unable to recover it.
00:27:39.511 [2024-05-15 07:04:53.512226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.511 [2024-05-15 07:04:53.512406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.511 [2024-05-15 07:04:53.512432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.511 [2024-05-15 07:04:53.512446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.511 [2024-05-15 07:04:53.512458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.511 [2024-05-15 07:04:53.512485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.511 qpair failed and we were unable to recover it.
00:27:39.511 [2024-05-15 07:04:53.522291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:39.511 [2024-05-15 07:04:53.522467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:39.511 [2024-05-15 07:04:53.522493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:39.511 [2024-05-15 07:04:53.522510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:39.511 [2024-05-15 07:04:53.522522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:39.511 [2024-05-15 07:04:53.522549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:39.511 qpair failed and we were unable to recover it.
00:27:39.511 [2024-05-15 07:04:53.532292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.511 [2024-05-15 07:04:53.532471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.511 [2024-05-15 07:04:53.532496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.511 [2024-05-15 07:04:53.532511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.511 [2024-05-15 07:04:53.532522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.511 [2024-05-15 07:04:53.532549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.511 qpair failed and we were unable to recover it. 00:27:39.511 [2024-05-15 07:04:53.542345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.511 [2024-05-15 07:04:53.542542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.511 [2024-05-15 07:04:53.542567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.511 [2024-05-15 07:04:53.542582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.511 [2024-05-15 07:04:53.542594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.511 [2024-05-15 07:04:53.542621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.511 qpair failed and we were unable to recover it. 00:27:39.511 [2024-05-15 07:04:53.552372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.552552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.552577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.552592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.552603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.552630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 
00:27:39.512 [2024-05-15 07:04:53.562369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.562546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.562571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.562585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.562597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.562624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.572398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.572575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.572600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.572614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.572626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.572652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.582492] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.582675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.582700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.582714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.582732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.582759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 
00:27:39.512 [2024-05-15 07:04:53.592487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.592673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.592699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.592713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.592725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.592752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.602590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.602766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.602792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.602807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.602819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.602846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.612507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.612685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.612710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.612725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.612737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.612764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 
00:27:39.512 [2024-05-15 07:04:53.622542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.622764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.622789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.622803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.622815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.622842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.632555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.632739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.632765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.632780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.632791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.632818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.642608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.642813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.642838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.642853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.642864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.642891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 
00:27:39.512 [2024-05-15 07:04:53.652633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.652852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.652878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.652892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.652905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.652939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.662701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.662880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.662905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.512 [2024-05-15 07:04:53.662920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.512 [2024-05-15 07:04:53.662938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.512 [2024-05-15 07:04:53.662968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.512 qpair failed and we were unable to recover it. 00:27:39.512 [2024-05-15 07:04:53.672687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.512 [2024-05-15 07:04:53.672874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.512 [2024-05-15 07:04:53.672900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.672914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.672939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.672969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 
00:27:39.513 [2024-05-15 07:04:53.682746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.513 [2024-05-15 07:04:53.682923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.513 [2024-05-15 07:04:53.682954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.682968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.682981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.683008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 00:27:39.513 [2024-05-15 07:04:53.692744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.513 [2024-05-15 07:04:53.692967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.513 [2024-05-15 07:04:53.692993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.693008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.693019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.693046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 00:27:39.513 [2024-05-15 07:04:53.702821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.513 [2024-05-15 07:04:53.703002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.513 [2024-05-15 07:04:53.703027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.703042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.703054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.703081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 
00:27:39.513 [2024-05-15 07:04:53.712845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.513 [2024-05-15 07:04:53.713023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.513 [2024-05-15 07:04:53.713049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.713064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.713075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.713102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 00:27:39.513 [2024-05-15 07:04:53.722853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.513 [2024-05-15 07:04:53.723044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.513 [2024-05-15 07:04:53.723079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.723094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.723106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.723134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 00:27:39.513 [2024-05-15 07:04:53.732896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.513 [2024-05-15 07:04:53.733108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.513 [2024-05-15 07:04:53.733134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.733148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.733160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.733186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 
00:27:39.513 [2024-05-15 07:04:53.742909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.513 [2024-05-15 07:04:53.743101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.513 [2024-05-15 07:04:53.743126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.513 [2024-05-15 07:04:53.743141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.513 [2024-05-15 07:04:53.743153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.513 [2024-05-15 07:04:53.743181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.513 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.752956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.753136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.753163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.753177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.753189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.753216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.762972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.763147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.763172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.763187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.763205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.763233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 
00:27:39.773 [2024-05-15 07:04:53.772994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.773165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.773191] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.773206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.773217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.773245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.783032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.783255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.783281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.783295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.783307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.783334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.793069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.793246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.793272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.793286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.793298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.793325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 
00:27:39.773 [2024-05-15 07:04:53.803117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.803329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.803354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.803369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.803381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.803408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.813168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.813368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.813393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.813408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.813420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.813447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.823154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.823354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.823379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.823393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.823405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.823431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 
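The trailing pair of lines in each block ("CQ transport error -6 ... qpair failed and we were unable to recover it.") is the host-side symptom of the same rejection: once the qpair is disconnected, spdk_nvme_qpair_process_completions() returns -ENXIO (-6) instead of a completion count. Below is a minimal sketch of that contract, assuming the public SPDK NVMe driver API; poll_io_qpair() is a hypothetical helper name, not code from this run.

#include <errno.h>
#include <stdbool.h>

#include "spdk/nvme.h"

/* Poll one I/O qpair; returns false when the caller must reconnect. */
static bool poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
	/* A max_completions of 0 lets the call drain everything available. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport-level disconnect: the "CQ transport error -6
		 * (No such device or address)" case in the log above. */
		return false;
	}
	return rc >= 0;
}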
00:27:39.773 [2024-05-15 07:04:53.833175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.833354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.833379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.833393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.833405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.833432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.843203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.843382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.843407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.843422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.843433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.843460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.853228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.853408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.853433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.853453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.853466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.853493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 
00:27:39.773 [2024-05-15 07:04:53.863256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.863432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.863457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.773 [2024-05-15 07:04:53.863471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.773 [2024-05-15 07:04:53.863483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.773 [2024-05-15 07:04:53.863510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.773 qpair failed and we were unable to recover it. 00:27:39.773 [2024-05-15 07:04:53.873316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.773 [2024-05-15 07:04:53.873510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.773 [2024-05-15 07:04:53.873536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.873550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.873562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.873589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.883296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.883482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.883508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.883522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.883534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.883560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 
00:27:39.774 [2024-05-15 07:04:53.893362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.893536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.893561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.893575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.893587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.893614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.903397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.903619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.903644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.903659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.903671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.903698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.913389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.913577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.913602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.913616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.913628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.913655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 
00:27:39.774 [2024-05-15 07:04:53.923455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.923675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.923700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.923715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.923727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.923754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.933463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.933635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.933661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.933676] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.933688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.933715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.943542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.943721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.943746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.943766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.943779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.943807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 
00:27:39.774 [2024-05-15 07:04:53.953494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.953668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.953693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.953706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.953718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.953745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.963575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.963747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.963772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.963787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.963799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.963826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.973577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.973785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.973811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.973825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.973837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.973864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 
00:27:39.774 [2024-05-15 07:04:53.983579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.983755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.983780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.983794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.983806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.983833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:53.993640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:53.993814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:53.993840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:53.993853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:53.993865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:53.993893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 00:27:39.774 [2024-05-15 07:04:54.003633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.774 [2024-05-15 07:04:54.003804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.774 [2024-05-15 07:04:54.003830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.774 [2024-05-15 07:04:54.003845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.774 [2024-05-15 07:04:54.003856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:39.774 [2024-05-15 07:04:54.003883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:39.774 qpair failed and we were unable to recover it. 
00:27:40.034 [2024-05-15 07:04:54.013678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.013845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.013872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.013886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.013898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.013926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 00:27:40.034 [2024-05-15 07:04:54.023747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.023927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.023962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.023979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.023991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.024020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 00:27:40.034 [2024-05-15 07:04:54.033768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.033965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.033992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.034012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.034024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.034052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 
00:27:40.034 [2024-05-15 07:04:54.043774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.043949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.043975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.043989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.044001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.044028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 00:27:40.034 [2024-05-15 07:04:54.053849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.054046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.054072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.054086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.054098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.054125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 00:27:40.034 [2024-05-15 07:04:54.063873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.064054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.064080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.064095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.064106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.064133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 
00:27:40.034 [2024-05-15 07:04:54.073906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.074111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.074137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.074151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.074162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.074190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 00:27:40.034 [2024-05-15 07:04:54.083896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.084139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.084166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.084181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.084196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.084225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 00:27:40.034 [2024-05-15 07:04:54.093906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.094090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.094116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.094131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.094143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.094170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 
00:27:40.034 [2024-05-15 07:04:54.103992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.034 [2024-05-15 07:04:54.104179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.034 [2024-05-15 07:04:54.104205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.034 [2024-05-15 07:04:54.104220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.034 [2024-05-15 07:04:54.104232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.034 [2024-05-15 07:04:54.104259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.034 qpair failed and we were unable to recover it. 00:27:40.034 [2024-05-15 07:04:54.114017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.114240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.114265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.114279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.114290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.114318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.123989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.124165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.124190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.124209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.124222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.124249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 
00:27:40.035 [2024-05-15 07:04:54.134059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.134230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.134256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.134270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.134282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.134309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.144073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.144289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.144314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.144329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.144340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.144367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.154109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.154328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.154354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.154368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.154380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.154407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 
00:27:40.035 [2024-05-15 07:04:54.164255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.164466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.164491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.164506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.164518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.164544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.174203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.174389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.174415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.174429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.174440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.174467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.184247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.184465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.184490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.184504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.184517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.184544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 
00:27:40.035 [2024-05-15 07:04:54.194263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.194433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.194459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.194473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.194485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.194511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.204263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.204445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.204471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.204485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.204496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.204523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.214378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.214556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.214586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.214601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.214613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.214640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 
00:27:40.035 [2024-05-15 07:04:54.224335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.035 [2024-05-15 07:04:54.224528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.035 [2024-05-15 07:04:54.224553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.035 [2024-05-15 07:04:54.224567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.035 [2024-05-15 07:04:54.224579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.035 [2024-05-15 07:04:54.224606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.035 qpair failed and we were unable to recover it. 00:27:40.035 [2024-05-15 07:04:54.234334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.036 [2024-05-15 07:04:54.234520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.036 [2024-05-15 07:04:54.234545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.036 [2024-05-15 07:04:54.234559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.036 [2024-05-15 07:04:54.234571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.036 [2024-05-15 07:04:54.234598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.036 qpair failed and we were unable to recover it. 00:27:40.036 [2024-05-15 07:04:54.244409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.036 [2024-05-15 07:04:54.244585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.036 [2024-05-15 07:04:54.244610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.036 [2024-05-15 07:04:54.244624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.036 [2024-05-15 07:04:54.244636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.036 [2024-05-15 07:04:54.244662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.036 qpair failed and we were unable to recover it. 
00:27:40.036 [2024-05-15 07:04:54.254414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.036 [2024-05-15 07:04:54.254584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.036 [2024-05-15 07:04:54.254609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.036 [2024-05-15 07:04:54.254623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.036 [2024-05-15 07:04:54.254635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.036 [2024-05-15 07:04:54.254662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.036 qpair failed and we were unable to recover it. 00:27:40.036 [2024-05-15 07:04:54.264437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.036 [2024-05-15 07:04:54.264626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.036 [2024-05-15 07:04:54.264651] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.036 [2024-05-15 07:04:54.264666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.036 [2024-05-15 07:04:54.264677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.036 [2024-05-15 07:04:54.264704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.036 qpair failed and we were unable to recover it. 00:27:40.299 [2024-05-15 07:04:54.274495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.299 [2024-05-15 07:04:54.274679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.299 [2024-05-15 07:04:54.274704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.299 [2024-05-15 07:04:54.274719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.299 [2024-05-15 07:04:54.274731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.299 [2024-05-15 07:04:54.274758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.299 qpair failed and we were unable to recover it. 
00:27:40.299 [2024-05-15 07:04:54.284516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.299 [2024-05-15 07:04:54.284719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.299 [2024-05-15 07:04:54.284744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.299 [2024-05-15 07:04:54.284759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.299 [2024-05-15 07:04:54.284770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.299 [2024-05-15 07:04:54.284797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.299 qpair failed and we were unable to recover it. 00:27:40.299 [2024-05-15 07:04:54.294549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.299 [2024-05-15 07:04:54.294767] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.299 [2024-05-15 07:04:54.294792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.299 [2024-05-15 07:04:54.294807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.299 [2024-05-15 07:04:54.294819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.299 [2024-05-15 07:04:54.294846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.299 qpair failed and we were unable to recover it. 00:27:40.299 [2024-05-15 07:04:54.304591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.299 [2024-05-15 07:04:54.304769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.299 [2024-05-15 07:04:54.304799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.299 [2024-05-15 07:04:54.304815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.299 [2024-05-15 07:04:54.304827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.299 [2024-05-15 07:04:54.304854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.299 qpair failed and we were unable to recover it. 
00:27:40.299 [2024-05-15 07:04:54.314630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.299 [2024-05-15 07:04:54.314804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.299 [2024-05-15 07:04:54.314830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.299 [2024-05-15 07:04:54.314844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.299 [2024-05-15 07:04:54.314856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.299 [2024-05-15 07:04:54.314883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.299 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.324586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.324768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.324793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.324808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.324820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.324847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.334643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.334823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.334859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.334874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.334886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.334913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 
00:27:40.300 [2024-05-15 07:04:54.344718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.344956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.344985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.345000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.345012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.345046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.354703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.354876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.354902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.354917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.354937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.354966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.364744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.364920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.364962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.364977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.364990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.365018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 
00:27:40.300 [2024-05-15 07:04:54.374754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.374972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.374997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.375012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.375024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.375051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.384807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.384998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.385024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.385039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.385051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.385079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.394832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.395037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.395069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.395084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.395096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.395123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 
00:27:40.300 [2024-05-15 07:04:54.404863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.405036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.405062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.405077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.405089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.405116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.414873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.415050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.415076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.415090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.415102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.415129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 00:27:40.300 [2024-05-15 07:04:54.424915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.425102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.425127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.300 [2024-05-15 07:04:54.425141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.300 [2024-05-15 07:04:54.425153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.300 [2024-05-15 07:04:54.425180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.300 qpair failed and we were unable to recover it. 
00:27:40.300 [2024-05-15 07:04:54.434959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.300 [2024-05-15 07:04:54.435140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.300 [2024-05-15 07:04:54.435165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.435179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.435190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.435223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 00:27:40.301 [2024-05-15 07:04:54.444990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.445174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.445203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.445218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.445230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.445257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 00:27:40.301 [2024-05-15 07:04:54.455037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.455217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.455243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.455258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.455273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.455300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 
00:27:40.301 [2024-05-15 07:04:54.465052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.465237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.465263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.465277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.465289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.465317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 00:27:40.301 [2024-05-15 07:04:54.475055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.475239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.475265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.475279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.475291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.475318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 00:27:40.301 [2024-05-15 07:04:54.485079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.485262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.485294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.485309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.485321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.485348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 
00:27:40.301 [2024-05-15 07:04:54.495147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.495321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.495346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.495360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.495372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.495399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 00:27:40.301 [2024-05-15 07:04:54.505155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.505334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.505359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.505373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.505385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.505412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 00:27:40.301 [2024-05-15 07:04:54.515163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.515340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.515366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.515381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.515393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.515420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 
00:27:40.301 [2024-05-15 07:04:54.525259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.301 [2024-05-15 07:04:54.525489] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.301 [2024-05-15 07:04:54.525518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.301 [2024-05-15 07:04:54.525533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.301 [2024-05-15 07:04:54.525549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.301 [2024-05-15 07:04:54.525583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.301 qpair failed and we were unable to recover it. 00:27:40.563 [2024-05-15 07:04:54.535225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.535404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.535430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.535448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.535460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.535488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 00:27:40.563 [2024-05-15 07:04:54.545308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.545485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.545511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.545525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.545537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.545564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 
00:27:40.563 [2024-05-15 07:04:54.555337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.555518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.555544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.555558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.555570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.555597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 00:27:40.563 [2024-05-15 07:04:54.565382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.565553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.565578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.565592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.565605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.565632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 00:27:40.563 [2024-05-15 07:04:54.575351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.575520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.575551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.575566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.575578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.575605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 
00:27:40.563 [2024-05-15 07:04:54.585391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.585564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.585589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.585603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.585615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.585642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 00:27:40.563 [2024-05-15 07:04:54.595431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.595617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.595642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.595657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.595668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.595696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 00:27:40.563 [2024-05-15 07:04:54.605469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.605654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.605679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.605694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.605705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.605732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 
00:27:40.563 [2024-05-15 07:04:54.615452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.615618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.563 [2024-05-15 07:04:54.615644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.563 [2024-05-15 07:04:54.615658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.563 [2024-05-15 07:04:54.615675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.563 [2024-05-15 07:04:54.615703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.563 qpair failed and we were unable to recover it. 00:27:40.563 [2024-05-15 07:04:54.625506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.563 [2024-05-15 07:04:54.625679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.564 [2024-05-15 07:04:54.625704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.564 [2024-05-15 07:04:54.625718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.564 [2024-05-15 07:04:54.625730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.564 [2024-05-15 07:04:54.625757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.564 qpair failed and we were unable to recover it. 00:27:40.564 [2024-05-15 07:04:54.635551] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.564 [2024-05-15 07:04:54.635728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.564 [2024-05-15 07:04:54.635753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.564 [2024-05-15 07:04:54.635767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.564 [2024-05-15 07:04:54.635779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.564 [2024-05-15 07:04:54.635806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.564 qpair failed and we were unable to recover it. 
00:27:40.564 [2024-05-15 07:04:54.645578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.564 [2024-05-15 07:04:54.645757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.564 [2024-05-15 07:04:54.645783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.564 [2024-05-15 07:04:54.645797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.564 [2024-05-15 07:04:54.645809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.564 [2024-05-15 07:04:54.645836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.564 qpair failed and we were unable to recover it. 00:27:40.564 [2024-05-15 07:04:54.655573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.564 [2024-05-15 07:04:54.655740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.564 [2024-05-15 07:04:54.655766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.564 [2024-05-15 07:04:54.655780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.564 [2024-05-15 07:04:54.655792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.564 [2024-05-15 07:04:54.655818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.564 qpair failed and we were unable to recover it. 00:27:40.564 [2024-05-15 07:04:54.665662] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:40.564 [2024-05-15 07:04:54.665915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:40.564 [2024-05-15 07:04:54.665952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:40.564 [2024-05-15 07:04:54.665968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:40.564 [2024-05-15 07:04:54.665980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:40.564 [2024-05-15 07:04:54.666008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:40.564 qpair failed and we were unable to recover it. 
00:27:40.564 [2024-05-15 07:04:54.675675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.564 [2024-05-15 07:04:54.675871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.564 [2024-05-15 07:04:54.675897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.564 [2024-05-15 07:04:54.675912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.564 [2024-05-15 07:04:54.675924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.564 [2024-05-15 07:04:54.675959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.564 qpair failed and we were unable to recover it.
00:27:40.564 [2024-05-15 07:04:54.685709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.564 [2024-05-15 07:04:54.685888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.564 [2024-05-15 07:04:54.685914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.564 [2024-05-15 07:04:54.685934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.564 [2024-05-15 07:04:54.685948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.564 [2024-05-15 07:04:54.685976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.564 qpair failed and we were unable to recover it.
00:27:40.564 [2024-05-15 07:04:54.695696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.564 [2024-05-15 07:04:54.695871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.564 [2024-05-15 07:04:54.695896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.564 [2024-05-15 07:04:54.695910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.564 [2024-05-15 07:04:54.695922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.564 [2024-05-15 07:04:54.695957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.564 qpair failed and we were unable to recover it.
00:27:40.564 [2024-05-15 07:04:54.705718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.564 [2024-05-15 07:04:54.705893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.564 [2024-05-15 07:04:54.705918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.564 [2024-05-15 07:04:54.705940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.564 [2024-05-15 07:04:54.705959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.564 [2024-05-15 07:04:54.705987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.564 qpair failed and we were unable to recover it.
00:27:40.564 [2024-05-15 07:04:54.715780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.564 [2024-05-15 07:04:54.715976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.564 [2024-05-15 07:04:54.716002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.564 [2024-05-15 07:04:54.716017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.564 [2024-05-15 07:04:54.716028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.564 [2024-05-15 07:04:54.716055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.564 qpair failed and we were unable to recover it.
00:27:40.564 [2024-05-15 07:04:54.725793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.564 [2024-05-15 07:04:54.725976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.564 [2024-05-15 07:04:54.726001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.564 [2024-05-15 07:04:54.726016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.564 [2024-05-15 07:04:54.726027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.564 [2024-05-15 07:04:54.726055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.564 qpair failed and we were unable to recover it.
00:27:40.564 [2024-05-15 07:04:54.735833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.564 [2024-05-15 07:04:54.736004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.564 [2024-05-15 07:04:54.736030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.564 [2024-05-15 07:04:54.736044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.564 [2024-05-15 07:04:54.736056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.564 [2024-05-15 07:04:54.736083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.564 qpair failed and we were unable to recover it.
00:27:40.564 [2024-05-15 07:04:54.745853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.565 [2024-05-15 07:04:54.746043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.565 [2024-05-15 07:04:54.746069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.565 [2024-05-15 07:04:54.746088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.565 [2024-05-15 07:04:54.746100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.565 [2024-05-15 07:04:54.746129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.565 qpair failed and we were unable to recover it.
00:27:40.565 [2024-05-15 07:04:54.755852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.565 [2024-05-15 07:04:54.756044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.565 [2024-05-15 07:04:54.756071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.565 [2024-05-15 07:04:54.756086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.565 [2024-05-15 07:04:54.756098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.565 [2024-05-15 07:04:54.756125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.565 qpair failed and we were unable to recover it.
00:27:40.565 [2024-05-15 07:04:54.765897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.565 [2024-05-15 07:04:54.766077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.565 [2024-05-15 07:04:54.766102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.565 [2024-05-15 07:04:54.766117] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.565 [2024-05-15 07:04:54.766129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.565 [2024-05-15 07:04:54.766156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.565 qpair failed and we were unable to recover it.
00:27:40.565 [2024-05-15 07:04:54.775915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.565 [2024-05-15 07:04:54.776091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.565 [2024-05-15 07:04:54.776117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.565 [2024-05-15 07:04:54.776131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.565 [2024-05-15 07:04:54.776143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.565 [2024-05-15 07:04:54.776169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.565 qpair failed and we were unable to recover it.
00:27:40.565 [2024-05-15 07:04:54.785980] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.565 [2024-05-15 07:04:54.786156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.565 [2024-05-15 07:04:54.786182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.565 [2024-05-15 07:04:54.786197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.565 [2024-05-15 07:04:54.786209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.565 [2024-05-15 07:04:54.786236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.565 qpair failed and we were unable to recover it.
00:27:40.565 [2024-05-15 07:04:54.795999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.826 [2024-05-15 07:04:54.796209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.826 [2024-05-15 07:04:54.796237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.826 [2024-05-15 07:04:54.796253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.826 [2024-05-15 07:04:54.796273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.826 [2024-05-15 07:04:54.796302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.826 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.806033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.806230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.806255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.806270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.806281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.806309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.816092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.816270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.816297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.816311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.816326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.816353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.826096] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.826286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.826311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.826325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.826337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.826364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.836137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.836348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.836374] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.836389] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.836400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.836427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.846154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.846340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.846366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.846381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.846393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.846420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.856197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.856365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.856391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.856405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.856417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.856444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.866230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.866408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.866433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.866447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.866458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.866486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.876247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.876424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.876449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.876463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.876475] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.876502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.886249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.886418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.886443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.886458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.886476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.886503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.896296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.896466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.896492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.896506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.896518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.896544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.906333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.906513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.906539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.906553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.906565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.906592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.916349] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.916566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.916592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.916606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.916618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.916645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.926402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.926573] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.926598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.926612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.926624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.827 [2024-05-15 07:04:54.926651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.827 qpair failed and we were unable to recover it.
00:27:40.827 [2024-05-15 07:04:54.936390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.827 [2024-05-15 07:04:54.936569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.827 [2024-05-15 07:04:54.936594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.827 [2024-05-15 07:04:54.936609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.827 [2024-05-15 07:04:54.936621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:54.936647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:54.946477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:54.946654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:54.946680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:54.946699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:54.946712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:54.946739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:54.956453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:54.956631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:54.956656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:54.956670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:54.956681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:54.956708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:54.966505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:54.966677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:54.966703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:54.966718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:54.966730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:54.966756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:54.976509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:54.976679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:54.976705] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:54.976725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:54.976738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:54.976765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:54.986573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:54.986748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:54.986774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:54.986788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:54.986800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:54.986826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:54.996565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:54.996737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:54.996762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:54.996776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:54.996788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:54.996814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:55.006718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:55.006919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:55.006951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:55.006967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:55.006979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:55.007006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:55.016626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:55.016828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:55.016853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:55.016867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:55.016879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:55.016906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:55.026686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:55.026862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:55.026888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:55.026902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:55.026914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:55.026948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:55.036684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:55.036910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:55.036942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:55.036959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:55.036971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:55.036998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:55.046713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:55.046890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:55.046915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:55.046935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:55.046949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:55.046977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:40.828 [2024-05-15 07:04:55.056737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:40.828 [2024-05-15 07:04:55.056911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:40.828 [2024-05-15 07:04:55.056942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:40.828 [2024-05-15 07:04:55.056959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:40.828 [2024-05-15 07:04:55.056971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:40.828 [2024-05-15 07:04:55.057004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:40.828 qpair failed and we were unable to recover it.
00:27:41.090 [2024-05-15 07:04:55.066773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.090 [2024-05-15 07:04:55.066954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.090 [2024-05-15 07:04:55.066980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.090 [2024-05-15 07:04:55.067001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.090 [2024-05-15 07:04:55.067013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.090 [2024-05-15 07:04:55.067041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.090 qpair failed and we were unable to recover it.
00:27:41.090 [2024-05-15 07:04:55.076845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.090 [2024-05-15 07:04:55.077053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.090 [2024-05-15 07:04:55.077079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.090 [2024-05-15 07:04:55.077093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.090 [2024-05-15 07:04:55.077105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.090 [2024-05-15 07:04:55.077132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.090 qpair failed and we were unable to recover it.
00:27:41.090 [2024-05-15 07:04:55.086835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.090 [2024-05-15 07:04:55.087024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.090 [2024-05-15 07:04:55.087049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.090 [2024-05-15 07:04:55.087063] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.090 [2024-05-15 07:04:55.087075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.090 [2024-05-15 07:04:55.087102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.090 qpair failed and we were unable to recover it.
00:27:41.090 [2024-05-15 07:04:55.096881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.090 [2024-05-15 07:04:55.097064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.090 [2024-05-15 07:04:55.097090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.090 [2024-05-15 07:04:55.097104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.090 [2024-05-15 07:04:55.097116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.090 [2024-05-15 07:04:55.097144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.090 qpair failed and we were unable to recover it.
00:27:41.090 [2024-05-15 07:04:55.106892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.090 [2024-05-15 07:04:55.107074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.090 [2024-05-15 07:04:55.107100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.090 [2024-05-15 07:04:55.107114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.090 [2024-05-15 07:04:55.107126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.090 [2024-05-15 07:04:55.107153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.090 qpair failed and we were unable to recover it.
00:27:41.090 [2024-05-15 07:04:55.116953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.117156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.117182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.117199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.117211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.117238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.126981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.127157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.127183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.127198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.127209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.127237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.137007] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.137224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.137249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.137264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.137276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.137302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.147032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.147211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.147236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.147250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.147262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.147290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.157063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.157285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.157311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.157331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.157343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.157370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.167065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.167242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.167267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.167282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.167294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.167320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.177125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.177304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.177329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.177345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.177357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.177383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.187167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.187343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.187369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.187383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.187395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.187422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.197181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.197358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.197384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.197398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.197410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.197437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.207204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.207378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.207403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.207417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.207429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.207456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.217248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.217420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.217445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.217459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.217471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.217498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.227252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.227429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.227454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.227468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.227480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.227507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.237291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.091 [2024-05-15 07:04:55.237470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.091 [2024-05-15 07:04:55.237495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.091 [2024-05-15 07:04:55.237509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.091 [2024-05-15 07:04:55.237521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.091 [2024-05-15 07:04:55.237547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.091 qpair failed and we were unable to recover it.
00:27:41.091 [2024-05-15 07:04:55.247307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.091 [2024-05-15 07:04:55.247517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.091 [2024-05-15 07:04:55.247543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.091 [2024-05-15 07:04:55.247568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.247581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.247610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 00:27:41.092 [2024-05-15 07:04:55.257336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.092 [2024-05-15 07:04:55.257547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.092 [2024-05-15 07:04:55.257574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.092 [2024-05-15 07:04:55.257589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.257604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.257633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 00:27:41.092 [2024-05-15 07:04:55.267390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.092 [2024-05-15 07:04:55.267563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.092 [2024-05-15 07:04:55.267589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.092 [2024-05-15 07:04:55.267603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.267615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.267641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 
00:27:41.092 [2024-05-15 07:04:55.277418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.092 [2024-05-15 07:04:55.277589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.092 [2024-05-15 07:04:55.277614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.092 [2024-05-15 07:04:55.277629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.277641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.277668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 00:27:41.092 [2024-05-15 07:04:55.287410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.092 [2024-05-15 07:04:55.287587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.092 [2024-05-15 07:04:55.287613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.092 [2024-05-15 07:04:55.287627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.287639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.287666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 00:27:41.092 [2024-05-15 07:04:55.297458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.092 [2024-05-15 07:04:55.297634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.092 [2024-05-15 07:04:55.297659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.092 [2024-05-15 07:04:55.297673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.297685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.297712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 
00:27:41.092 [2024-05-15 07:04:55.307505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.092 [2024-05-15 07:04:55.307677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.092 [2024-05-15 07:04:55.307702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.092 [2024-05-15 07:04:55.307716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.307728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.307756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 00:27:41.092 [2024-05-15 07:04:55.317512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.092 [2024-05-15 07:04:55.317687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.092 [2024-05-15 07:04:55.317713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.092 [2024-05-15 07:04:55.317728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.092 [2024-05-15 07:04:55.317740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.092 [2024-05-15 07:04:55.317768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.092 qpair failed and we were unable to recover it. 00:27:41.351 [2024-05-15 07:04:55.327544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.327717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.327743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.327757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.327769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.327796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 
00:27:41.352 [2024-05-15 07:04:55.337581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.337752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.337777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.337799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.337811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.337839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.347647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.347858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.347884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.347899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.347910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.347945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.357703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.357985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.358012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.358026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.358038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.358066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 
00:27:41.352 [2024-05-15 07:04:55.367706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.367893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.367919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.367939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.367953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.367980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.377686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.377852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.377878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.377892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.377905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.377941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.387776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.387961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.387987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.388001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.388013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.388040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 
00:27:41.352 [2024-05-15 07:04:55.397747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.397922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.397953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.397968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.397980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.398006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.407794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.408013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.408039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.408054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.408065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.408093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.417842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.418047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.418073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.418087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.418099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.418127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 
00:27:41.352 [2024-05-15 07:04:55.427877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.428064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.428095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.428111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.428123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.428150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.437985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.438206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.438233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.438248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.438261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.438289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.352 qpair failed and we were unable to recover it. 00:27:41.352 [2024-05-15 07:04:55.447912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.352 [2024-05-15 07:04:55.448137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.352 [2024-05-15 07:04:55.448162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.352 [2024-05-15 07:04:55.448177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.352 [2024-05-15 07:04:55.448189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.352 [2024-05-15 07:04:55.448216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 
00:27:41.353 [2024-05-15 07:04:55.458028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.458205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.458230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.458244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.458256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.458284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.467982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.468187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.468213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.468235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.468248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.468276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.477990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.478168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.478194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.478208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.478221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.478248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 
00:27:41.353 [2024-05-15 07:04:55.488109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.488285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.488313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.488329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.488341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.488370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.498082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.498262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.498294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.498309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.498321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.498347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.508099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.508320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.508346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.508360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.508372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.508399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 
00:27:41.353 [2024-05-15 07:04:55.518191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.518383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.518414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.518429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.518441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.518469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.528143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.528320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.528346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.528371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.528383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.528410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.538155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.538337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.538363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.538378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.538390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.538417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 
00:27:41.353 [2024-05-15 07:04:55.548223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.548440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.548468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.548483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.548496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.548523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.558200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.558378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.558404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.558418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.558430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.558462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.353 [2024-05-15 07:04:55.568256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.568426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.568452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.568467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.568479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.568506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 
00:27:41.353 [2024-05-15 07:04:55.578263] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.353 [2024-05-15 07:04:55.578442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.353 [2024-05-15 07:04:55.578478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.353 [2024-05-15 07:04:55.578492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.353 [2024-05-15 07:04:55.578504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.353 [2024-05-15 07:04:55.578530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.353 qpair failed and we were unable to recover it. 00:27:41.612 [2024-05-15 07:04:55.588301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.588473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.588499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.588513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.588525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.588552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.598338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.598513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.598539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.598554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.598566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.598593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 
00:27:41.613 [2024-05-15 07:04:55.608399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.608581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.608612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.608628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.608640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.608667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.618468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.618666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.618692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.618706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.618717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.618745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.628432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.628604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.628629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.628643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.628655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.628683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 
00:27:41.613 [2024-05-15 07:04:55.638533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.638714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.638741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.638759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.638771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.638800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.648504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.648683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.648709] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.648723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.648735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.648768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.658535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.658739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.658765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.658780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.658791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.658819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 
00:27:41.613 [2024-05-15 07:04:55.668539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.668713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.668739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.668753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.668765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.668792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.678612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.678785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.678811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.678826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.678838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.678865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.688605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.688776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.688801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.688815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.688827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.688854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 
00:27:41.613 [2024-05-15 07:04:55.698608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.698783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.698813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.698829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.698841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.698867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.708670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.708870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.613 [2024-05-15 07:04:55.708895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.613 [2024-05-15 07:04:55.708909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.613 [2024-05-15 07:04:55.708921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.613 [2024-05-15 07:04:55.708956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.613 qpair failed and we were unable to recover it. 00:27:41.613 [2024-05-15 07:04:55.718716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.613 [2024-05-15 07:04:55.718943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.614 [2024-05-15 07:04:55.718969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.614 [2024-05-15 07:04:55.718984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.614 [2024-05-15 07:04:55.718995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.614 [2024-05-15 07:04:55.719023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.614 qpair failed and we were unable to recover it. 
00:27:41.614 [2024-05-15 07:04:55.728727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.614 [2024-05-15 07:04:55.728910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.614 [2024-05-15 07:04:55.728941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.614 [2024-05-15 07:04:55.728957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.614 [2024-05-15 07:04:55.728969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.614 [2024-05-15 07:04:55.728996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.614 qpair failed and we were unable to recover it. 00:27:41.614 [2024-05-15 07:04:55.738716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.614 [2024-05-15 07:04:55.738890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.614 [2024-05-15 07:04:55.738915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.614 [2024-05-15 07:04:55.738937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.614 [2024-05-15 07:04:55.738952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.614 [2024-05-15 07:04:55.738985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.614 qpair failed and we were unable to recover it. 00:27:41.614 [2024-05-15 07:04:55.748784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:41.614 [2024-05-15 07:04:55.748999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:41.614 [2024-05-15 07:04:55.749024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:41.614 [2024-05-15 07:04:55.749038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:41.614 [2024-05-15 07:04:55.749050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:41.614 [2024-05-15 07:04:55.749077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:41.614 qpair failed and we were unable to recover it. 
00:27:41.614 [2024-05-15 07:04:55.758814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.759000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.759027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.759046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.759059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.759087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.768849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.769051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.769078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.769092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.769104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.769131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.778872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.779069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.779095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.779109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.779121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.779148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.788885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.789066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.789096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.789111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.789123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.789151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.798935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.799109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.799134] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.799149] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.799160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.799188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.808941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.809155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.809180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.809195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.809206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.809233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.818994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.819164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.819189] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.819204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.819216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.819243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.829039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.829212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.829238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.829252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.829269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.829298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.614 [2024-05-15 07:04:55.839042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.614 [2024-05-15 07:04:55.839225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.614 [2024-05-15 07:04:55.839250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.614 [2024-05-15 07:04:55.839264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.614 [2024-05-15 07:04:55.839276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.614 [2024-05-15 07:04:55.839303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.614 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.849062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.849234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.849260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.849275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.849287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.849314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.859112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.859292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.859318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.859332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.859344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.859371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.869128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.869308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.869332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.869347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.869359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.869386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.879195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.879368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.879399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.879414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.879426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.879453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.889211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.889415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.889440] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.889455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.889466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.889493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.899242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.899424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.899450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.899464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.899476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.899503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.909260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.909434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.909460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.909475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.909486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.909514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.919292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.919504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.919529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.919543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.919560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.919588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.929343] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.929513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.929538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.929553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.929565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.929591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.939329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.939501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.939526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.939540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.939553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.939579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.874 qpair failed and we were unable to recover it.
00:27:41.874 [2024-05-15 07:04:55.949364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.874 [2024-05-15 07:04:55.949589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.874 [2024-05-15 07:04:55.949614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.874 [2024-05-15 07:04:55.949629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.874 [2024-05-15 07:04:55.949640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.874 [2024-05-15 07:04:55.949667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:55.959400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:55.959584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:55.959608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:55.959621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:55.959633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:55.959660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:55.969413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:55.969590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:55.969615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:55.969634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:55.969646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:55.969673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:55.979475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:55.979643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:55.979668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:55.979682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:55.979694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:55.979720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:55.989509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:55.989681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:55.989706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:55.989720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:55.989732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:55.989759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:55.999543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:55.999761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:55.999787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:55.999802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:55.999814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:55.999840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.009599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.009834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.009859] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.009873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.009894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.009922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.019560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.019746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.019772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.019786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.019798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.019825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.029615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.029815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.029840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.029854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.029866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.029893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.039635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.039813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.039838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.039852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.039865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.039891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.049681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.049850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.049875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.049889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.049901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.049928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.059670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.059852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.059878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.059892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.059904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.059939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.069723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.069900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.069925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.069947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.069960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.069988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.079818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.080026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.080052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.080067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.875 [2024-05-15 07:04:56.080079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.875 [2024-05-15 07:04:56.080106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.875 qpair failed and we were unable to recover it.
00:27:41.875 [2024-05-15 07:04:56.089848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.875 [2024-05-15 07:04:56.090053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.875 [2024-05-15 07:04:56.090079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.875 [2024-05-15 07:04:56.090093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.876 [2024-05-15 07:04:56.090106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.876 [2024-05-15 07:04:56.090133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.876 qpair failed and we were unable to recover it.
00:27:41.876 [2024-05-15 07:04:56.099811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:41.876 [2024-05-15 07:04:56.099991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:41.876 [2024-05-15 07:04:56.100017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:41.876 [2024-05-15 07:04:56.100031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:41.876 [2024-05-15 07:04:56.100048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:41.876 [2024-05-15 07:04:56.100077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:41.876 qpair failed and we were unable to recover it.
00:27:42.136 [2024-05-15 07:04:56.109854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.136 [2024-05-15 07:04:56.110039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.136 [2024-05-15 07:04:56.110065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.136 [2024-05-15 07:04:56.110079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.110091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.110118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.119872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.120050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.120076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.120091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.120103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.120130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.129902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.130081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.130107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.130122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.130133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.130161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.139949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.140167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.140193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.140207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.140220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.140246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.149981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.150177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.150202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.150216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.150228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.150255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.160020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.160190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.160215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.160230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.160242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.160269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.170055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.170232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.170258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.170272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.170284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.170312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.180101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.180339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.180365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.180379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.180391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.180418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.190077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.190250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.190276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.190296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.190308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.190335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.200116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.200337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.200363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.200378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.200389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.200416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.210157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.210330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.210356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.210371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.210382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.210410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.220181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.220352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.220378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.220393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.220404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.220431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.230213] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.230430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.230456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.230470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.230482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.230509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.240271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.137 [2024-05-15 07:04:56.240483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.137 [2024-05-15 07:04:56.240510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.137 [2024-05-15 07:04:56.240524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.137 [2024-05-15 07:04:56.240540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.137 [2024-05-15 07:04:56.240569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.137 qpair failed and we were unable to recover it.
00:27:42.137 [2024-05-15 07:04:56.250233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.250410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.250435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.250449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.250461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.250488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.260276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.260449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.260475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.260490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.260501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.260529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.270350] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.270540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.270565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.270579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.270591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.270618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.280356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.280531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.280556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.280577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.280589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.280616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.290412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.290586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.290611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.290625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.290637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.290664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.300423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.300600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.300625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.300640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.300651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.300678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.310426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.310603] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.310628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.310642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.310654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.310682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.320442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.320612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.320637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.320652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.320663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.320690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.330483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.330657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.330682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.330697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.330709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.330736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.340502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.340674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.340700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.340714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.340726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.340753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.350563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.350743] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.350767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.350781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.350793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.350820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.138 [2024-05-15 07:04:56.360536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.138 [2024-05-15 07:04:56.360706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.138 [2024-05-15 07:04:56.360731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.138 [2024-05-15 07:04:56.360745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.138 [2024-05-15 07:04:56.360757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.138 [2024-05-15 07:04:56.360784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.138 qpair failed and we were unable to recover it.
00:27:42.398 [2024-05-15 07:04:56.370585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.398 [2024-05-15 07:04:56.370762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.398 [2024-05-15 07:04:56.370788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.398 [2024-05-15 07:04:56.370808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.398 [2024-05-15 07:04:56.370822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.398 [2024-05-15 07:04:56.370849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.398 qpair failed and we were unable to recover it.
00:27:42.398 [2024-05-15 07:04:56.380611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:42.398 [2024-05-15 07:04:56.380782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:42.399 [2024-05-15 07:04:56.380808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:42.399 [2024-05-15 07:04:56.380822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:42.399 [2024-05-15 07:04:56.380834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0
00:27:42.399 [2024-05-15 07:04:56.380861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:42.399 qpair failed and we were unable to recover it.
00:27:42.399 [2024-05-15 07:04:56.390633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.390806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.390832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.390846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.390857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.390884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.400709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.400892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.400918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.400939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.400953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.400980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.410687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.410879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.410905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.410919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.410937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.410966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 
00:27:42.399 [2024-05-15 07:04:56.420751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.420922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.420955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.420970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.420982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.421009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.430780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.430985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.431011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.431025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.431037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.431064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.440814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.441016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.441043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.441057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.441069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.441095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 
00:27:42.399 [2024-05-15 07:04:56.450828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.451071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.451096] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.451110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.451122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.451148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.460854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.461033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.461058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.461079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.461091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.461118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.470898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.471080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.471105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.471119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.471131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.471158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 
00:27:42.399 [2024-05-15 07:04:56.480921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.481107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.481131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.481146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.481158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.481184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.490951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.491116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.491141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.491155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.491167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.491194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.500989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.501165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.501190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.501204] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.501216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.501243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 
00:27:42.399 [2024-05-15 07:04:56.511009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.399 [2024-05-15 07:04:56.511183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.399 [2024-05-15 07:04:56.511208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.399 [2024-05-15 07:04:56.511222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.399 [2024-05-15 07:04:56.511234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.399 [2024-05-15 07:04:56.511261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.399 qpair failed and we were unable to recover it. 00:27:42.399 [2024-05-15 07:04:56.521036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.521206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.521231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.521246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.521258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.521284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.400 [2024-05-15 07:04:56.531093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.531314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.531339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.531353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.531365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.531392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 
00:27:42.400 [2024-05-15 07:04:56.541098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.541267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.541293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.541307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.541319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.541346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.400 [2024-05-15 07:04:56.551150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.551325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.551349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.551369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.551381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.551408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.400 [2024-05-15 07:04:56.561180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.561352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.561378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.561392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.561404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.561431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 
00:27:42.400 [2024-05-15 07:04:56.571211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.571392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.571417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.571432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.571444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.571471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.400 [2024-05-15 07:04:56.581205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.581377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.581402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.581417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.581428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.581455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.400 [2024-05-15 07:04:56.591255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.591426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.591452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.591467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.591479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.591506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 
00:27:42.400 [2024-05-15 07:04:56.601305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.601478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.601503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.601518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.601530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.601557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.400 [2024-05-15 07:04:56.611266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.611436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.611462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.611476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.611488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.611516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.400 [2024-05-15 07:04:56.621300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.621470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.621496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.621510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.621522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.621549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 
00:27:42.400 [2024-05-15 07:04:56.631356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.400 [2024-05-15 07:04:56.631538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.400 [2024-05-15 07:04:56.631564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.400 [2024-05-15 07:04:56.631583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.400 [2024-05-15 07:04:56.631595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.400 [2024-05-15 07:04:56.631623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.400 qpair failed and we were unable to recover it. 00:27:42.661 [2024-05-15 07:04:56.641417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.641590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.641621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.641637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.641649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.641676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.651457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.651631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.651656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.651670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.651683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.651710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 
00:27:42.662 [2024-05-15 07:04:56.661487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.661659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.661684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.661699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.661710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.661738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.671497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.671681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.671707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.671722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.671734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.671760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.681513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.681693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.681718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.681733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.681745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.681772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 
00:27:42.662 [2024-05-15 07:04:56.691555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.691738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.691764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.691778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.691790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.691817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.701653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.701828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.701853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.701868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.701879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.701906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.711619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.711803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.711828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.711843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.711855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.711881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 
00:27:42.662 [2024-05-15 07:04:56.721645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.721827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.721853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.721867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.721878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.721905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.731646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.731821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.731850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.731865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.731877] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.731904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.741759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.741938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.741964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.741979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.741991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.742018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 
00:27:42.662 [2024-05-15 07:04:56.751845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.752081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.752106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.752121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.752133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.752160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.761739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.761909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.761944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.761961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.761973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.662 [2024-05-15 07:04:56.762000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.662 qpair failed and we were unable to recover it. 00:27:42.662 [2024-05-15 07:04:56.771797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.662 [2024-05-15 07:04:56.772007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.662 [2024-05-15 07:04:56.772032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.662 [2024-05-15 07:04:56.772047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.662 [2024-05-15 07:04:56.772059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.772095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 
00:27:42.663 [2024-05-15 07:04:56.781851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.782034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.782060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.782074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.782086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.782113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.791864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.792046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.792071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.792085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.792097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.792124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.801883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.802060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.802085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.802100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.802111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.802139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 
00:27:42.663 [2024-05-15 07:04:56.811966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.812151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.812176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.812191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.812203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.812230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.821903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.822086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.822117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.822132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.822143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.822170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.831981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.832174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.832199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.832213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.832224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.832251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 
00:27:42.663 [2024-05-15 07:04:56.841967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.842188] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.842213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.842227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.842239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.842266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.852009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.852182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.852207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.852221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.852233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.852260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.862070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.862253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.862280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.862298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.862310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.862344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 
00:27:42.663 [2024-05-15 07:04:56.872103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.872285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.872312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.872329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.872341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.872368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.882102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.882271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.882297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.882311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.882323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.882351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 00:27:42.663 [2024-05-15 07:04:56.892117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.663 [2024-05-15 07:04:56.892298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.663 [2024-05-15 07:04:56.892323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.663 [2024-05-15 07:04:56.892338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.663 [2024-05-15 07:04:56.892349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.663 [2024-05-15 07:04:56.892377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.663 qpair failed and we were unable to recover it. 
00:27:42.925 [2024-05-15 07:04:56.902143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:42.925 [2024-05-15 07:04:56.902317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:42.925 [2024-05-15 07:04:56.902343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:42.925 [2024-05-15 07:04:56.902358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:42.925 [2024-05-15 07:04:56.902370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:42.925 [2024-05-15 07:04:56.902396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:42.925 qpair failed and we were unable to recover it.
[... the identical CONNECT failure sequence (same ctrlr.c/nvme_fabric.c/nvme_tcp.c/nvme_qpair.c records, same tqpair=0x24809f0, same qpair id 3) repeats at roughly 10 ms intervals from 07:04:56.912 through 07:04:57.193 — 29 more attempts, each ending "qpair failed and we were unable to recover it." ...]
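For anyone decoding the repeated status above: sc 130 is 0x82, which for a fabrics CONNECT command (sct 1, i.e. command-specific status) corresponds to "Connect Invalid Parameters" in the NVMe-oF specification — consistent with the target-side "Unknown controller ID 0x1" message, and suggesting the I/O-qpair CONNECTs keep referencing a controller the target has already torn down. A quick way to tally the rejections from a saved copy of this output (the log file name below is a placeholder for illustration, not something the test writes itself):

# Count the rejected I/O-qpair CONNECT attempts in a saved copy of this log.
# "target_disconnect.log" is a hypothetical path chosen for illustration.
grep -c 'qpair failed and we were unable to recover it' target_disconnect.log
# Confirm every rejection carried the same fabrics status (sct 1, sc 130 == 0x82).
grep -c 'sct 1, sc 130' target_disconnect.log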
00:27:43.186 [2024-05-15 07:04:57.203083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:43.186 [2024-05-15 07:04:57.203260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:43.186 [2024-05-15 07:04:57.203285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:43.186 [2024-05-15 07:04:57.203300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:43.186 [2024-05-15 07:04:57.203311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24809f0 00:27:43.186 [2024-05-15 07:04:57.203338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:43.186 qpair failed and we were unable to recover it. 00:27:43.186 [2024-05-15 07:04:57.203438] nvme_ctrlr.c:4325:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:43.186 A controller has encountered a failure and is being reset. 00:27:43.186 Controller properly reset. 00:27:43.186 Initializing NVMe Controllers 00:27:43.186 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:43.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:43.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:43.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:43.186 Initialization complete. Launching workers. 
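After "Controller properly reset." the host re-attaches the subsystem on all four lcores, so recovery succeeded. A rough out-of-band check of the same path with the kernel initiator and nvme-cli would look like the sketch below — illustrative only, assuming the SPDK target is still listening on 10.0.0.2:4420 and is reachable from the test host:

# Minimal sketch: attach to the recovered subsystem with the kernel initiator.
sudo modprobe nvme-fabrics
sudo modprobe nvme-tcp
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
sudo nvme list-subsys        # should now show a live tcp path to cnode1
sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1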
00:27:43.186 Starting thread on core 1 00:27:43.186 Starting thread on core 2 00:27:43.186 Starting thread on core 3 00:27:43.186 Starting thread on core 0 00:27:43.186 07:04:57 -- host/target_disconnect.sh@59 -- # sync 00:27:43.186 00:27:43.186 real 0m11.563s 00:27:43.186 user 0m18.450s 00:27:43.186 sys 0m5.849s 00:27:43.186 07:04:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.186 07:04:57 -- common/autotest_common.sh@10 -- # set +x 00:27:43.186 ************************************ 00:27:43.186 END TEST nvmf_target_disconnect_tc2 00:27:43.186 ************************************ 00:27:43.186 07:04:57 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:27:43.186 07:04:57 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:43.186 07:04:57 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:27:43.186 07:04:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:43.186 07:04:57 -- nvmf/common.sh@116 -- # sync 00:27:43.186 07:04:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:43.186 07:04:57 -- nvmf/common.sh@119 -- # set +e 00:27:43.186 07:04:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:43.186 07:04:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:43.186 rmmod nvme_tcp 00:27:43.186 rmmod nvme_fabrics 00:27:43.444 rmmod nvme_keyring 00:27:43.444 07:04:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:43.444 07:04:57 -- nvmf/common.sh@123 -- # set -e 00:27:43.444 07:04:57 -- nvmf/common.sh@124 -- # return 0 00:27:43.444 07:04:57 -- nvmf/common.sh@477 -- # '[' -n 629336 ']' 00:27:43.444 07:04:57 -- nvmf/common.sh@478 -- # killprocess 629336 00:27:43.444 07:04:57 -- common/autotest_common.sh@926 -- # '[' -z 629336 ']' 00:27:43.444 07:04:57 -- common/autotest_common.sh@930 -- # kill -0 629336 00:27:43.444 07:04:57 -- common/autotest_common.sh@931 -- # uname 00:27:43.444 07:04:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:43.444 07:04:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 629336 00:27:43.444 07:04:57 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:27:43.444 07:04:57 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:27:43.444 07:04:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 629336' 00:27:43.444 killing process with pid 629336 00:27:43.444 07:04:57 -- common/autotest_common.sh@945 -- # kill 629336 00:27:43.444 07:04:57 -- common/autotest_common.sh@950 -- # wait 629336 00:27:43.703 07:04:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:43.703 07:04:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:43.703 07:04:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:43.703 07:04:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.703 07:04:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:43.703 07:04:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.703 07:04:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.703 07:04:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:45.612 07:04:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:45.612 00:27:45.612 real 0m16.744s 00:27:45.612 user 0m44.985s 00:27:45.612 sys 0m8.192s 00:27:45.612 07:04:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.612 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.612 ************************************ 00:27:45.612 END TEST nvmf_target_disconnect 00:27:45.612 
************************************ 00:27:45.612 07:04:59 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:45.612 07:04:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:45.612 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.612 07:04:59 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:45.612 00:27:45.612 real 21m7.253s 00:27:45.612 user 58m42.319s 00:27:45.612 sys 5m5.702s 00:27:45.612 07:04:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.612 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.612 ************************************ 00:27:45.612 END TEST nvmf_tcp 00:27:45.612 ************************************ 00:27:45.612 07:04:59 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:27:45.612 07:04:59 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:45.612 07:04:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:45.612 07:04:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:45.612 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.612 ************************************ 00:27:45.612 START TEST spdkcli_nvmf_tcp 00:27:45.612 ************************************ 00:27:45.612 07:04:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:45.871 * Looking for test storage... 00:27:45.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:45.871 07:04:59 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:45.871 07:04:59 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:45.871 07:04:59 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:45.871 07:04:59 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.871 07:04:59 -- nvmf/common.sh@7 -- # uname -s 00:27:45.871 07:04:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.871 07:04:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.871 07:04:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.871 07:04:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.871 07:04:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.871 07:04:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.871 07:04:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.871 07:04:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.871 07:04:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.871 07:04:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.871 07:04:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.871 07:04:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:45.871 07:04:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.871 07:04:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.871 07:04:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.871 07:04:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.871 07:04:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
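The spdkcli test starting here launches its own nvmf_tgt and drives it through spdkcli. A condensed manual equivalent, run from the root of a built SPDK checkout, might look like the sketch below; the one-shot invocation style is inferred from the `spdkcli.py ll /nvmf` call the test itself makes further down, and the sleep is a crude stand-in for the harness's waitforlisten:

# Start the target on cores 0-1 (-m 0x3), then mirror the first few job commands.
sudo ./build/bin/nvmf_tgt -m 0x3 &
sleep 2
sudo ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
sudo ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
sudo ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
sudo ./scripts/spdkcli.py ll /nvmf   # the listing the match check later diffs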
00:27:45.871 07:04:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.871 07:04:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.871 07:04:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.871 07:04:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.871 07:04:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.871 07:04:59 -- paths/export.sh@5 -- # export PATH 00:27:45.871 07:04:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.871 07:04:59 -- nvmf/common.sh@46 -- # : 0 00:27:45.871 07:04:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:45.871 07:04:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:45.871 07:04:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:45.871 07:04:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.871 07:04:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.871 07:04:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:45.871 07:04:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:45.871 07:04:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:45.871 07:04:59 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:45.871 07:04:59 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:45.871 07:04:59 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:45.871 07:04:59 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:45.871 07:04:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:45.871 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.871 07:04:59 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:45.871 07:04:59 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=630560 00:27:45.871 07:04:59 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:45.871 07:04:59 -- spdkcli/common.sh@34 -- # waitforlisten 630560 00:27:45.871 07:04:59 -- common/autotest_common.sh@819 -- # '[' -z 630560 ']' 00:27:45.871 07:04:59 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:27:45.871 07:04:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:45.871 07:04:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.871 07:04:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:45.871 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:27:45.871 [2024-05-15 07:04:59.951718] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:27:45.871 [2024-05-15 07:04:59.951806] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630560 ] 00:27:45.871 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.871 [2024-05-15 07:05:00.025095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:46.131 [2024-05-15 07:05:00.143718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:46.131 [2024-05-15 07:05:00.143976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.131 [2024-05-15 07:05:00.143976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.066 07:05:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:47.066 07:05:00 -- common/autotest_common.sh@852 -- # return 0 00:27:47.066 07:05:00 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:47.066 07:05:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:47.066 07:05:00 -- common/autotest_common.sh@10 -- # set +x 00:27:47.066 07:05:00 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:47.066 07:05:00 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:47.066 07:05:00 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:47.066 07:05:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:47.066 07:05:00 -- common/autotest_common.sh@10 -- # set +x 00:27:47.066 07:05:00 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:47.066 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:47.066 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:47.066 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:47.066 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:47.066 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:47.066 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:47.066 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:47.066 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:47.066 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:47.066 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:47.066 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:47.066 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:47.066 ' 00:27:47.325 [2024-05-15 07:05:01.383768] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:49.870 [2024-05-15 07:05:03.545122] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.808 [2024-05-15 07:05:04.793653] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:53.350 [2024-05-15 07:05:07.060995] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:55.258 [2024-05-15 07:05:09.035344] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:56.636 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:56.636 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:56.636 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:56.636 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:56.636 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:56.636 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:56.636 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:56.636 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:56.636 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:56.636 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:56.636 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:56.636 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:56.636 07:05:10 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:56.636 07:05:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:56.636 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:27:56.636 07:05:10 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:56.636 07:05:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:56.636 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:27:56.636 07:05:10 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:56.636 07:05:10 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:56.895 07:05:11 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:57.152 07:05:11 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:57.153 07:05:11 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:57.153 07:05:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:57.153 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:27:57.153 07:05:11 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:57.153 07:05:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:57.153 07:05:11 -- common/autotest_common.sh@10 -- # set +x 00:27:57.153 07:05:11 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:57.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:57.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:57.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:57.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:57.153 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:57.153 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:57.153 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:57.153 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:57.153 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:57.153 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:57.153 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:57.153 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:57.153 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:57.153 ' 00:28:02.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:02.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:02.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:02.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:02.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:28:02.413 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:28:02.413 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:02.413 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:02.413 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:02.413 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:02.413 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:02.413 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:02.413 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:02.413 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:02.413 07:05:16 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:02.413 07:05:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:02.413 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:28:02.413 07:05:16 -- spdkcli/nvmf.sh@90 -- # killprocess 630560 00:28:02.413 07:05:16 -- common/autotest_common.sh@926 -- # '[' -z 630560 ']' 00:28:02.413 07:05:16 -- common/autotest_common.sh@930 -- # kill -0 630560 00:28:02.413 07:05:16 -- common/autotest_common.sh@931 -- # uname 00:28:02.413 07:05:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:02.413 07:05:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 630560 00:28:02.413 07:05:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:02.413 07:05:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:02.413 07:05:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 630560' 00:28:02.413 killing process with pid 630560 00:28:02.413 07:05:16 -- common/autotest_common.sh@945 -- # kill 630560 00:28:02.413 [2024-05-15 07:05:16.524162] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:02.413 07:05:16 -- common/autotest_common.sh@950 -- # wait 630560 00:28:02.671 07:05:16 -- spdkcli/nvmf.sh@1 -- # cleanup 00:28:02.671 07:05:16 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:28:02.671 07:05:16 -- spdkcli/common.sh@13 -- # '[' -n 630560 ']' 00:28:02.671 07:05:16 -- spdkcli/common.sh@14 -- # killprocess 630560 00:28:02.671 07:05:16 -- common/autotest_common.sh@926 -- # '[' -z 630560 ']' 00:28:02.671 07:05:16 -- common/autotest_common.sh@930 -- # kill -0 630560 00:28:02.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (630560) - No such process 00:28:02.671 07:05:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 630560 is not found' 00:28:02.671 Process with pid 630560 is not found 00:28:02.671 07:05:16 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:28:02.671 07:05:16 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:28:02.671 07:05:16 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:28:02.671 00:28:02.671 real 0m16.954s 00:28:02.671 user 0m35.987s 00:28:02.671 sys 0m0.898s 00:28:02.671 07:05:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.671 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:28:02.671 ************************************ 00:28:02.671 END TEST spdkcli_nvmf_tcp 00:28:02.671 ************************************ 00:28:02.671 07:05:16 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:02.671 07:05:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:02.671 07:05:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.671 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:28:02.671 ************************************ 00:28:02.671 START TEST 
nvmf_identify_passthru 00:28:02.671 ************************************ 00:28:02.671 07:05:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:28:02.671 * Looking for test storage... 00:28:02.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:02.671 07:05:16 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.671 07:05:16 -- nvmf/common.sh@7 -- # uname -s 00:28:02.671 07:05:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.671 07:05:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.671 07:05:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.671 07:05:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.671 07:05:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.671 07:05:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.671 07:05:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.671 07:05:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.671 07:05:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.671 07:05:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.671 07:05:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.671 07:05:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.671 07:05:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.671 07:05:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.671 07:05:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.671 07:05:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.671 07:05:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.671 07:05:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.671 07:05:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.671 07:05:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.671 07:05:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.672 07:05:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.672 07:05:16 -- paths/export.sh@5 -- # export PATH 00:28:02.672 
07:05:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.672 07:05:16 -- nvmf/common.sh@46 -- # : 0 00:28:02.672 07:05:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:02.672 07:05:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:02.672 07:05:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:02.672 07:05:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.672 07:05:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.672 07:05:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:02.672 07:05:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:02.672 07:05:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:02.672 07:05:16 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.672 07:05:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.672 07:05:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.672 07:05:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.672 07:05:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.672 07:05:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.672 07:05:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.672 07:05:16 -- paths/export.sh@5 -- # export PATH 00:28:02.672 07:05:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.672 07:05:16 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:28:02.672 07:05:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:02.672 07:05:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.672 07:05:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:02.672 07:05:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:02.672 07:05:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:02.672 07:05:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.672 07:05:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:02.672 07:05:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.672 07:05:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:02.672 07:05:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:02.672 07:05:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:02.672 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:28:05.259 07:05:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:05.259 07:05:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:05.259 07:05:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:05.259 07:05:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:05.259 07:05:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:05.259 07:05:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:05.259 07:05:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:05.259 07:05:19 -- nvmf/common.sh@294 -- # net_devs=() 00:28:05.259 07:05:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:05.259 07:05:19 -- nvmf/common.sh@295 -- # e810=() 00:28:05.259 07:05:19 -- nvmf/common.sh@295 -- # local -ga e810 00:28:05.259 07:05:19 -- nvmf/common.sh@296 -- # x722=() 00:28:05.259 07:05:19 -- nvmf/common.sh@296 -- # local -ga x722 00:28:05.259 07:05:19 -- nvmf/common.sh@297 -- # mlx=() 00:28:05.259 07:05:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:05.259 07:05:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.259 07:05:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:05.259 07:05:19 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:05.259 07:05:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:05.259 07:05:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:05.259 07:05:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:05.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:05.259 07:05:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:05.259 07:05:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:05.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:05.259 07:05:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:05.259 07:05:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:05.259 07:05:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.259 07:05:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:05.259 07:05:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.259 07:05:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:05.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:05.259 07:05:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.259 07:05:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:05.259 07:05:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.259 07:05:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:05.259 07:05:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.259 07:05:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:05.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:05.259 07:05:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.259 07:05:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:05.259 07:05:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:05.259 07:05:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:05.259 07:05:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:05.259 07:05:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.259 07:05:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.259 07:05:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.259 07:05:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:05.259 07:05:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.259 07:05:19 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.259 07:05:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:05.259 07:05:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.259 07:05:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.260 07:05:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:05.260 07:05:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:05.260 07:05:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.260 07:05:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.260 07:05:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.260 07:05:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.260 07:05:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:05.260 07:05:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.260 07:05:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.260 07:05:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.260 07:05:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:05.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:28:05.260 00:28:05.260 --- 10.0.0.2 ping statistics --- 00:28:05.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.260 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:28:05.260 07:05:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:05.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:28:05.260 00:28:05.260 --- 10.0.0.1 ping statistics --- 00:28:05.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.260 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:28:05.260 07:05:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.260 07:05:19 -- nvmf/common.sh@410 -- # return 0 00:28:05.260 07:05:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:05.260 07:05:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.260 07:05:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:05.260 07:05:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:05.260 07:05:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.260 07:05:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:05.260 07:05:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:05.260 07:05:19 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:28:05.260 07:05:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:05.260 07:05:19 -- common/autotest_common.sh@10 -- # set +x 00:28:05.260 07:05:19 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:28:05.260 07:05:19 -- common/autotest_common.sh@1509 -- # bdfs=() 00:28:05.260 07:05:19 -- common/autotest_common.sh@1509 -- # local bdfs 00:28:05.260 07:05:19 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:28:05.260 07:05:19 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:28:05.260 07:05:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:05.260 07:05:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:05.260 07:05:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:28:05.260 07:05:19 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:05.260 07:05:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:05.519 07:05:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:05.519 07:05:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:28:05.519 07:05:19 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:28:05.519 07:05:19 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:28:05.519 07:05:19 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:28:05.519 07:05:19 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:28:05.519 07:05:19 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:28:05.519 07:05:19 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:05.519 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.703 07:05:23 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:28:09.703 07:05:23 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:28:09.703 07:05:23 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:09.703 07:05:23 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:09.703 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.889 07:05:27 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:13.889 07:05:27 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:13.889 07:05:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:13.889 07:05:27 -- common/autotest_common.sh@10 -- # set +x 00:28:13.889 07:05:27 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:13.889 07:05:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:13.889 07:05:27 -- common/autotest_common.sh@10 -- # set +x 00:28:13.889 07:05:27 -- target/identify_passthru.sh@31 -- # nvmfpid=635705 00:28:13.889 07:05:27 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:13.889 07:05:27 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:13.889 07:05:27 -- target/identify_passthru.sh@35 -- # waitforlisten 635705 00:28:13.889 07:05:27 -- common/autotest_common.sh@819 -- # '[' -z 635705 ']' 00:28:13.889 07:05:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.889 07:05:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:13.889 07:05:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.889 07:05:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:13.889 07:05:27 -- common/autotest_common.sh@10 -- # set +x 00:28:13.889 [2024-05-15 07:05:28.037050] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
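The target launch traced above reduces to a short pattern: run nvmf_tgt inside the test namespace with --wait-for-rpc, then poll the RPC socket until the application answers. A minimal sketch of that pattern, assuming an SPDK checkout at $rootdir and using rpc.py's rpc_get_methods as the readiness probe (both illustrative choices, not taken verbatim from this run):

    # Start the target inside the namespace created earlier; -m 0xF pins four
    # cores, and --wait-for-rpc holds initialization until framework_start_init.
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Approximate waitforlisten: poll until the RPC socket accepts a request.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done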
00:28:13.889 [2024-05-15 07:05:28.037142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.889 EAL: No free 2048 kB hugepages reported on node 1 00:28:13.889 [2024-05-15 07:05:28.118845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.147 [2024-05-15 07:05:28.234938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:14.147 [2024-05-15 07:05:28.235097] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.147 [2024-05-15 07:05:28.235115] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.147 [2024-05-15 07:05:28.235127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.147 [2024-05-15 07:05:28.236955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.147 [2024-05-15 07:05:28.237003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.147 [2024-05-15 07:05:28.237088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.147 [2024-05-15 07:05:28.237092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.147 07:05:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:14.147 07:05:28 -- common/autotest_common.sh@852 -- # return 0 00:28:14.147 07:05:28 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:14.147 07:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.147 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:14.147 INFO: Log level set to 20 00:28:14.147 INFO: Requests: 00:28:14.147 { 00:28:14.147 "jsonrpc": "2.0", 00:28:14.147 "method": "nvmf_set_config", 00:28:14.147 "id": 1, 00:28:14.147 "params": { 00:28:14.147 "admin_cmd_passthru": { 00:28:14.147 "identify_ctrlr": true 00:28:14.147 } 00:28:14.147 } 00:28:14.147 } 00:28:14.147 00:28:14.147 INFO: response: 00:28:14.147 { 00:28:14.147 "jsonrpc": "2.0", 00:28:14.147 "id": 1, 00:28:14.147 "result": true 00:28:14.147 } 00:28:14.147 00:28:14.147 07:05:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.147 07:05:28 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:14.147 07:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.147 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:14.147 INFO: Setting log level to 20 00:28:14.147 INFO: Setting log level to 20 00:28:14.147 INFO: Log level set to 20 00:28:14.147 INFO: Log level set to 20 00:28:14.147 INFO: Requests: 00:28:14.147 { 00:28:14.147 "jsonrpc": "2.0", 00:28:14.147 "method": "framework_start_init", 00:28:14.147 "id": 1 00:28:14.147 } 00:28:14.147 00:28:14.147 INFO: Requests: 00:28:14.147 { 00:28:14.147 "jsonrpc": "2.0", 00:28:14.147 "method": "framework_start_init", 00:28:14.147 "id": 1 00:28:14.147 } 00:28:14.147 00:28:14.147 [2024-05-15 07:05:28.375134] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:14.147 INFO: response: 00:28:14.147 { 00:28:14.147 "jsonrpc": "2.0", 00:28:14.147 "id": 1, 00:28:14.147 "result": true 00:28:14.147 } 00:28:14.147 00:28:14.147 INFO: response: 00:28:14.147 { 00:28:14.147 "jsonrpc": "2.0", 00:28:14.147 "id": 1, 00:28:14.147 "result": true 00:28:14.147 } 00:28:14.147 00:28:14.147 07:05:28 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.147 07:05:28 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.405 07:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.405 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:14.405 INFO: Setting log level to 40 00:28:14.405 INFO: Setting log level to 40 00:28:14.405 INFO: Setting log level to 40 00:28:14.406 [2024-05-15 07:05:28.385105] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.406 07:05:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.406 07:05:28 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:14.406 07:05:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:14.406 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:14.406 07:05:28 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:28:14.406 07:05:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.406 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:28:17.687 Nvme0n1 00:28:17.687 07:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.687 07:05:31 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:17.687 07:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.687 07:05:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.687 07:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.687 07:05:31 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:17.687 07:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.687 07:05:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.687 07:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.687 07:05:31 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.687 07:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.687 07:05:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.687 [2024-05-15 07:05:31.278755] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.687 07:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.687 07:05:31 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:17.687 07:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.687 07:05:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.687 [2024-05-15 07:05:31.286502] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:28:17.687 [ 00:28:17.687 { 00:28:17.687 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:17.687 "subtype": "Discovery", 00:28:17.687 "listen_addresses": [], 00:28:17.687 "allow_any_host": true, 00:28:17.687 "hosts": [] 00:28:17.687 }, 00:28:17.687 { 00:28:17.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.687 "subtype": "NVMe", 00:28:17.687 "listen_addresses": [ 00:28:17.687 { 00:28:17.687 "transport": "TCP", 00:28:17.687 "trtype": "TCP", 00:28:17.687 "adrfam": "IPv4", 00:28:17.687 "traddr": "10.0.0.2", 00:28:17.687 "trsvcid": "4420" 00:28:17.687 } 00:28:17.687 ], 00:28:17.687 "allow_any_host": true, 00:28:17.687 "hosts": [], 00:28:17.687 "serial_number": "SPDK00000000000001", 
00:28:17.687 "model_number": "SPDK bdev Controller", 00:28:17.687 "max_namespaces": 1, 00:28:17.687 "min_cntlid": 1, 00:28:17.687 "max_cntlid": 65519, 00:28:17.687 "namespaces": [ 00:28:17.687 { 00:28:17.687 "nsid": 1, 00:28:17.687 "bdev_name": "Nvme0n1", 00:28:17.687 "name": "Nvme0n1", 00:28:17.687 "nguid": "7AB08F99BC5643BCAADB1437E81D59AA", 00:28:17.687 "uuid": "7ab08f99-bc56-43bc-aadb-1437e81d59aa" 00:28:17.687 } 00:28:17.687 ] 00:28:17.687 } 00:28:17.687 ] 00:28:17.687 07:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.687 07:05:31 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:17.687 07:05:31 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:17.687 07:05:31 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:17.687 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.687 07:05:31 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:28:17.687 07:05:31 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:17.687 07:05:31 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:17.687 07:05:31 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:17.687 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.687 07:05:31 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:17.687 07:05:31 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:28:17.687 07:05:31 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:17.687 07:05:31 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.687 07:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.687 07:05:31 -- common/autotest_common.sh@10 -- # set +x 00:28:17.687 07:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.687 07:05:31 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:17.687 07:05:31 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:17.687 07:05:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:17.687 07:05:31 -- nvmf/common.sh@116 -- # sync 00:28:17.687 07:05:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:17.687 07:05:31 -- nvmf/common.sh@119 -- # set +e 00:28:17.687 07:05:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:17.687 07:05:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:17.687 rmmod nvme_tcp 00:28:17.687 rmmod nvme_fabrics 00:28:17.687 rmmod nvme_keyring 00:28:17.687 07:05:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:17.687 07:05:31 -- nvmf/common.sh@123 -- # set -e 00:28:17.687 07:05:31 -- nvmf/common.sh@124 -- # return 0 00:28:17.687 07:05:31 -- nvmf/common.sh@477 -- # '[' -n 635705 ']' 00:28:17.687 07:05:31 -- nvmf/common.sh@478 -- # killprocess 635705 00:28:17.687 07:05:31 -- common/autotest_common.sh@926 -- # '[' -z 635705 ']' 00:28:17.687 07:05:31 -- common/autotest_common.sh@930 -- # kill -0 635705 00:28:17.687 07:05:31 -- common/autotest_common.sh@931 -- # uname 00:28:17.687 07:05:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:17.687 07:05:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 635705 00:28:17.687 07:05:31 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:17.687 07:05:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:17.687 07:05:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 635705' 00:28:17.687 killing process with pid 635705 00:28:17.687 07:05:31 -- common/autotest_common.sh@945 -- # kill 635705 00:28:17.687 [2024-05-15 07:05:31.806703] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:28:17.687 07:05:31 -- common/autotest_common.sh@950 -- # wait 635705 00:28:19.590 07:05:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:19.590 07:05:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:19.590 07:05:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:19.590 07:05:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:19.590 07:05:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:19.590 07:05:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.590 07:05:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:19.590 07:05:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.493 07:05:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:21.493 00:28:21.493 real 0m18.616s 00:28:21.493 user 0m27.138s 00:28:21.493 sys 0m2.656s 00:28:21.493 07:05:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:21.493 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:28:21.493 ************************************ 00:28:21.493 END TEST nvmf_identify_passthru 00:28:21.493 ************************************ 00:28:21.493 07:05:35 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:21.493 07:05:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:21.493 07:05:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:21.493 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:28:21.493 ************************************ 00:28:21.493 START TEST nvmf_dif 00:28:21.493 ************************************ 00:28:21.493 07:05:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:21.493 * Looking for test storage... 
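The nvmftestfini teardown traced above undoes the setup in reverse: unload the initiator-side kernel modules pulled in by modprobe nvme-tcp, then strip the namespace plumbing. A rough sketch of the same steps (the netns deletion inside _remove_spdk_ns is an assumption here, not shown in this trace):

    # Unload initiator-side kernel NVMe-oF modules.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Drop the initiator address and (presumably) the test namespace itself.
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk    # assumed to be what _remove_spdk_ns does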
00:28:21.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:21.493 07:05:35 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.493 07:05:35 -- nvmf/common.sh@7 -- # uname -s 00:28:21.493 07:05:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.493 07:05:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.493 07:05:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.493 07:05:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.493 07:05:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.493 07:05:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.493 07:05:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.493 07:05:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.493 07:05:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.493 07:05:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.493 07:05:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:21.493 07:05:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:21.493 07:05:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.493 07:05:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.493 07:05:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.493 07:05:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.493 07:05:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.493 07:05:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.493 07:05:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.493 07:05:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.493 07:05:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.493 07:05:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.493 07:05:35 -- paths/export.sh@5 -- # export PATH 00:28:21.493 07:05:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.493 07:05:35 -- nvmf/common.sh@46 -- # : 0 00:28:21.493 07:05:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:21.493 07:05:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:21.493 07:05:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:21.493 07:05:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.493 07:05:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.493 07:05:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:21.493 07:05:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:21.493 07:05:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:21.493 07:05:35 -- target/dif.sh@15 -- # NULL_META=16 00:28:21.493 07:05:35 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:21.493 07:05:35 -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:21.493 07:05:35 -- target/dif.sh@15 -- # NULL_DIF=1 00:28:21.493 07:05:35 -- target/dif.sh@135 -- # nvmftestinit 00:28:21.493 07:05:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:21.493 07:05:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.493 07:05:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:21.493 07:05:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:21.493 07:05:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:21.493 07:05:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.493 07:05:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:21.493 07:05:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.493 07:05:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:21.493 07:05:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:21.493 07:05:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:21.493 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:28:24.021 07:05:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:24.021 07:05:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:24.021 07:05:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:24.021 07:05:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:24.021 07:05:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:24.021 07:05:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:24.021 07:05:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:24.021 07:05:37 -- nvmf/common.sh@294 -- # net_devs=() 00:28:24.021 07:05:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:24.021 07:05:37 -- nvmf/common.sh@295 -- # e810=() 00:28:24.021 07:05:37 -- nvmf/common.sh@295 -- # local -ga e810 00:28:24.021 07:05:37 -- nvmf/common.sh@296 -- # x722=() 00:28:24.021 07:05:37 -- nvmf/common.sh@296 -- # local -ga x722 00:28:24.021 07:05:37 -- nvmf/common.sh@297 -- # mlx=() 00:28:24.021 07:05:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:24.021 07:05:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:28:24.021 07:05:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.021 07:05:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:24.021 07:05:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:24.021 07:05:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:24.021 07:05:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:24.021 07:05:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.021 07:05:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:24.021 07:05:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:24.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.021 07:05:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:24.021 07:05:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:24.021 07:05:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.021 07:05:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:24.021 07:05:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.021 07:05:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.021 07:05:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.021 07:05:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:24.021 07:05:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.021 07:05:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:24.021 07:05:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.021 07:05:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.021 07:05:37 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:24.021 07:05:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:24.021 07:05:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:24.021 07:05:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:24.021 07:05:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:24.021 07:05:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.021 07:05:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.021 07:05:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.021 07:05:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:24.021 07:05:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.021 07:05:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.021 07:05:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:24.021 07:05:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.021 07:05:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.021 07:05:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:24.021 07:05:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:24.021 07:05:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.021 07:05:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.021 07:05:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.021 07:05:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.021 07:05:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:24.021 07:05:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.021 07:05:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.021 07:05:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.021 07:05:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:24.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:28:24.021 00:28:24.021 --- 10.0.0.2 ping statistics --- 00:28:24.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.021 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:24.021 07:05:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:28:24.021 00:28:24.021 --- 10.0.0.1 ping statistics --- 00:28:24.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.021 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:28:24.021 07:05:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.021 07:05:38 -- nvmf/common.sh@410 -- # return 0 00:28:24.021 07:05:38 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:28:24.021 07:05:38 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:25.396 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:25.396 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:25.396 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:25.396 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:25.396 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:25.396 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:25.396 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:25.396 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:25.396 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:25.396 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:25.396 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:25.396 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:25.396 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:25.396 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:25.396 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:25.396 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:25.396 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:25.396 07:05:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:25.396 07:05:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:25.396 07:05:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:25.396 07:05:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:25.396 07:05:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:25.396 07:05:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:25.396 07:05:39 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:25.396 07:05:39 -- target/dif.sh@137 -- # nvmfappstart 00:28:25.654 07:05:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:25.654 07:05:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:25.654 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:28:25.654 07:05:39 -- nvmf/common.sh@469 -- # nvmfpid=639408 00:28:25.654 07:05:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:25.654 07:05:39 -- nvmf/common.sh@470 -- # waitforlisten 639408 00:28:25.654 07:05:39 -- common/autotest_common.sh@819 -- # '[' -z 639408 ']' 00:28:25.654 07:05:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:25.654 07:05:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:25.654 07:05:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:25.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
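The namespace plumbing traced above is the entire TCP test topology: one port of the E810 pair stays in the root namespace as the initiator interface, its sibling moves into a private namespace as the target interface, and a /24 plus an iptables accept rule make 10.0.0.1 and 10.0.0.2 mutually reachable. Condensed from the commands visible in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator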
00:28:25.654 07:05:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:25.654 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:28:25.654 [2024-05-15 07:05:39.672781] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:28:25.654 [2024-05-15 07:05:39.672851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.654 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.654 [2024-05-15 07:05:39.748599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.654 [2024-05-15 07:05:39.856042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:25.654 [2024-05-15 07:05:39.856205] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:25.654 [2024-05-15 07:05:39.856232] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:25.655 [2024-05-15 07:05:39.856245] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:25.655 [2024-05-15 07:05:39.856297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.611 07:05:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:26.611 07:05:40 -- common/autotest_common.sh@852 -- # return 0 00:28:26.611 07:05:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:26.611 07:05:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:26.611 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:26.611 07:05:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.611 07:05:40 -- target/dif.sh@139 -- # create_transport 00:28:26.611 07:05:40 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:26.611 07:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.611 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:26.611 [2024-05-15 07:05:40.703619] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.611 07:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.611 07:05:40 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:26.611 07:05:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:26.611 07:05:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:26.611 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:26.611 ************************************ 00:28:26.611 START TEST fio_dif_1_default 00:28:26.611 ************************************ 00:28:26.611 07:05:40 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:28:26.611 07:05:40 -- target/dif.sh@86 -- # create_subsystems 0 00:28:26.611 07:05:40 -- target/dif.sh@28 -- # local sub 00:28:26.611 07:05:40 -- target/dif.sh@30 -- # for sub in "$@" 00:28:26.611 07:05:40 -- target/dif.sh@31 -- # create_subsystem 0 00:28:26.611 07:05:40 -- target/dif.sh@18 -- # local sub_id=0 00:28:26.611 07:05:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:26.611 07:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.611 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:26.611 bdev_null0 00:28:26.611 07:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.611 07:05:40 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:26.611 07:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.611 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:26.611 07:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.611 07:05:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:26.611 07:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.611 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:26.611 07:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.611 07:05:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:26.611 07:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:26.611 07:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:26.611 [2024-05-15 07:05:40.739846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:26.611 07:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:26.611 07:05:40 -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:26.611 07:05:40 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:26.611 07:05:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:26.611 07:05:40 -- nvmf/common.sh@520 -- # config=() 00:28:26.611 07:05:40 -- nvmf/common.sh@520 -- # local subsystem config 00:28:26.611 07:05:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:26.611 07:05:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.611 07:05:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:26.611 { 00:28:26.611 "params": { 00:28:26.611 "name": "Nvme$subsystem", 00:28:26.611 "trtype": "$TEST_TRANSPORT", 00:28:26.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:26.611 "adrfam": "ipv4", 00:28:26.611 "trsvcid": "$NVMF_PORT", 00:28:26.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:26.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:26.611 "hdgst": ${hdgst:-false}, 00:28:26.611 "ddgst": ${ddgst:-false} 00:28:26.611 }, 00:28:26.611 "method": "bdev_nvme_attach_controller" 00:28:26.611 } 00:28:26.611 EOF 00:28:26.611 )") 00:28:26.611 07:05:40 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.611 07:05:40 -- target/dif.sh@82 -- # gen_fio_conf 00:28:26.611 07:05:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:26.611 07:05:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:26.611 07:05:40 -- target/dif.sh@54 -- # local file 00:28:26.611 07:05:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:26.611 07:05:40 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.611 07:05:40 -- target/dif.sh@56 -- # cat 00:28:26.611 07:05:40 -- common/autotest_common.sh@1320 -- # shift 00:28:26.611 07:05:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:26.612 07:05:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.612 07:05:40 -- nvmf/common.sh@542 -- # cat 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.612 07:05:40 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:26.612 07:05:40 -- target/dif.sh@72 -- # (( file <= files )) 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:26.612 07:05:40 -- nvmf/common.sh@544 -- # jq . 00:28:26.612 07:05:40 -- nvmf/common.sh@545 -- # IFS=, 00:28:26.612 07:05:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:26.612 "params": { 00:28:26.612 "name": "Nvme0", 00:28:26.612 "trtype": "tcp", 00:28:26.612 "traddr": "10.0.0.2", 00:28:26.612 "adrfam": "ipv4", 00:28:26.612 "trsvcid": "4420", 00:28:26.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:26.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:26.612 "hdgst": false, 00:28:26.612 "ddgst": false 00:28:26.612 }, 00:28:26.612 "method": "bdev_nvme_attach_controller" 00:28:26.612 }' 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:26.612 07:05:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:26.612 07:05:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:26.612 07:05:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:26.612 07:05:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:26.612 07:05:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:26.612 07:05:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:26.869 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:26.869 fio-3.35 00:28:26.869 Starting 1 thread 00:28:26.869 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.432 [2024-05-15 07:05:41.377645] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:27.432 [2024-05-15 07:05:41.377713] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:37.394 00:28:37.394 filename0: (groupid=0, jobs=1): err= 0: pid=639774: Wed May 15 07:05:51 2024 00:28:37.394 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:28:37.394 slat (nsec): min=4388, max=66220, avg=8494.94, stdev=3568.93 00:28:37.394 clat (usec): min=960, max=45216, avg=21564.99, stdev=20433.34 00:28:37.394 lat (usec): min=967, max=45265, avg=21573.49, stdev=20432.96 00:28:37.394 clat percentiles (usec): 00:28:37.394 | 1.00th=[ 988], 5.00th=[ 1020], 10.00th=[ 1029], 20.00th=[ 1037], 00:28:37.394 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[41157], 60.00th=[41681], 00:28:37.394 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:28:37.394 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:28:37.394 | 99.99th=[45351] 00:28:37.394 bw ( KiB/s): min= 672, max= 768, per=99.87%, avg=740.80, stdev=33.28, samples=20 00:28:37.394 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:28:37.394 lat (usec) : 1000=1.83% 00:28:37.394 lat (msec) : 2=47.95%, 50=50.22% 00:28:37.394 cpu : usr=90.33%, sys=9.39%, ctx=23, majf=0, minf=283 00:28:37.394 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.394 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.394 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:37.394 00:28:37.394 Run status group 0 (all jobs): 00:28:37.394 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10019-10019msec 00:28:37.653 07:05:51 -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:37.653 07:05:51 -- target/dif.sh@43 -- # local sub 00:28:37.653 07:05:51 -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.653 07:05:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:37.653 07:05:51 -- target/dif.sh@36 -- # local sub_id=0 00:28:37.653 07:05:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 00:28:37.653 real 0m11.032s 00:28:37.653 user 0m10.070s 00:28:37.653 sys 0m1.196s 00:28:37.653 07:05:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 ************************************ 00:28:37.653 END TEST fio_dif_1_default 00:28:37.653 ************************************ 00:28:37.653 07:05:51 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:37.653 07:05:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:37.653 07:05:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 ************************************ 00:28:37.653 START TEST 
fio_dif_1_multi_subsystems 00:28:37.653 ************************************ 00:28:37.653 07:05:51 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:28:37.653 07:05:51 -- target/dif.sh@92 -- # local files=1 00:28:37.653 07:05:51 -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:37.653 07:05:51 -- target/dif.sh@28 -- # local sub 00:28:37.653 07:05:51 -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.653 07:05:51 -- target/dif.sh@31 -- # create_subsystem 0 00:28:37.653 07:05:51 -- target/dif.sh@18 -- # local sub_id=0 00:28:37.653 07:05:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 bdev_null0 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 [2024-05-15 07:05:51.804624] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.653 07:05:51 -- target/dif.sh@31 -- # create_subsystem 1 00:28:37.653 07:05:51 -- target/dif.sh@18 -- # local sub_id=1 00:28:37.653 07:05:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 bdev_null1 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- common/autotest_common.sh@10 -- # set +x 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:37.653 07:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.653 07:05:51 -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.653 07:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.653 07:05:51 -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:37.653 07:05:51 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:37.653 07:05:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:37.653 07:05:51 -- nvmf/common.sh@520 -- # config=() 00:28:37.653 07:05:51 -- nvmf/common.sh@520 -- # local subsystem config 00:28:37.653 07:05:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:37.653 07:05:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:37.653 { 00:28:37.653 "params": { 00:28:37.653 "name": "Nvme$subsystem", 00:28:37.653 "trtype": "$TEST_TRANSPORT", 00:28:37.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.653 "adrfam": "ipv4", 00:28:37.653 "trsvcid": "$NVMF_PORT", 00:28:37.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.653 "hdgst": ${hdgst:-false}, 00:28:37.653 "ddgst": ${ddgst:-false} 00:28:37.653 }, 00:28:37.653 "method": "bdev_nvme_attach_controller" 00:28:37.653 } 00:28:37.653 EOF 00:28:37.653 )") 00:28:37.653 07:05:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.653 07:05:51 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.653 07:05:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:37.653 07:05:51 -- target/dif.sh@82 -- # gen_fio_conf 00:28:37.653 07:05:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:37.653 07:05:51 -- target/dif.sh@54 -- # local file 00:28:37.653 07:05:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:37.653 07:05:51 -- target/dif.sh@56 -- # cat 00:28:37.653 07:05:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.653 07:05:51 -- common/autotest_common.sh@1320 -- # shift 00:28:37.653 07:05:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:37.653 07:05:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.653 07:05:51 -- nvmf/common.sh@542 -- # cat 00:28:37.653 07:05:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.653 07:05:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:37.653 07:05:51 -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.653 07:05:51 -- target/dif.sh@73 -- # cat 00:28:37.653 07:05:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:37.653 07:05:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:37.653 07:05:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:37.653 07:05:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:37.653 { 00:28:37.653 "params": { 00:28:37.653 "name": "Nvme$subsystem", 00:28:37.653 "trtype": "$TEST_TRANSPORT", 00:28:37.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.653 "adrfam": "ipv4", 00:28:37.653 "trsvcid": "$NVMF_PORT", 00:28:37.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.653 "hdgst": ${hdgst:-false}, 00:28:37.653 "ddgst": ${ddgst:-false} 00:28:37.653 }, 00:28:37.653 "method": "bdev_nvme_attach_controller" 00:28:37.653 } 00:28:37.653 EOF 00:28:37.653 )") 00:28:37.653 07:05:51 -- 
nvmf/common.sh@542 -- # cat 00:28:37.653 07:05:51 -- target/dif.sh@72 -- # (( file++ )) 00:28:37.653 07:05:51 -- target/dif.sh@72 -- # (( file <= files )) 00:28:37.653 07:05:51 -- nvmf/common.sh@544 -- # jq . 00:28:37.653 07:05:51 -- nvmf/common.sh@545 -- # IFS=, 00:28:37.653 07:05:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:37.653 "params": { 00:28:37.653 "name": "Nvme0", 00:28:37.653 "trtype": "tcp", 00:28:37.653 "traddr": "10.0.0.2", 00:28:37.653 "adrfam": "ipv4", 00:28:37.653 "trsvcid": "4420", 00:28:37.653 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:37.653 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:37.653 "hdgst": false, 00:28:37.653 "ddgst": false 00:28:37.653 }, 00:28:37.653 "method": "bdev_nvme_attach_controller" 00:28:37.653 },{ 00:28:37.653 "params": { 00:28:37.653 "name": "Nvme1", 00:28:37.653 "trtype": "tcp", 00:28:37.653 "traddr": "10.0.0.2", 00:28:37.653 "adrfam": "ipv4", 00:28:37.653 "trsvcid": "4420", 00:28:37.653 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.653 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.653 "hdgst": false, 00:28:37.653 "ddgst": false 00:28:37.653 }, 00:28:37.653 "method": "bdev_nvme_attach_controller" 00:28:37.653 }' 00:28:37.653 07:05:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:37.654 07:05:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:37.654 07:05:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:37.654 07:05:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:37.654 07:05:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:37.654 07:05:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:37.654 07:05:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:37.654 07:05:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:37.654 07:05:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:37.654 07:05:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:37.912 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:37.912 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:37.912 fio-3.35 00:28:37.912 Starting 2 threads 00:28:37.912 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.478 [2024-05-15 07:05:52.602155] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:38.478 [2024-05-15 07:05:52.602238] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:50.669 00:28:50.669 filename0: (groupid=0, jobs=1): err= 0: pid=641213: Wed May 15 07:06:02 2024 00:28:50.669 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:28:50.669 slat (nsec): min=6824, max=39718, avg=9575.21, stdev=3565.25 00:28:50.669 clat (usec): min=40958, max=43922, avg=41970.15, stdev=215.87 00:28:50.669 lat (usec): min=40965, max=43962, avg=41979.72, stdev=216.50 00:28:50.669 clat percentiles (usec): 00:28:50.669 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:28:50.669 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:50.669 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:50.669 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:28:50.669 | 99.99th=[43779] 00:28:50.669 bw ( KiB/s): min= 352, max= 384, per=49.88%, avg=380.80, stdev= 9.85, samples=20 00:28:50.669 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:28:50.669 lat (msec) : 50=100.00% 00:28:50.669 cpu : usr=94.43%, sys=5.27%, ctx=13, majf=0, minf=76 00:28:50.669 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.669 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.669 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:50.669 filename1: (groupid=0, jobs=1): err= 0: pid=641214: Wed May 15 07:06:02 2024 00:28:50.669 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10037msec) 00:28:50.669 slat (nsec): min=6653, max=76481, avg=9921.12, stdev=4588.48 00:28:50.669 clat (usec): min=40964, max=43897, avg=41965.03, stdev=257.21 00:28:50.669 lat (usec): min=40971, max=43937, avg=41974.95, stdev=257.58 00:28:50.669 clat percentiles (usec): 00:28:50.669 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:28:50.669 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:28:50.669 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:50.669 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:28:50.669 | 99.99th=[43779] 00:28:50.669 bw ( KiB/s): min= 352, max= 384, per=49.88%, avg=380.80, stdev= 9.85, samples=20 00:28:50.669 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:28:50.669 lat (msec) : 50=100.00% 00:28:50.669 cpu : usr=94.99%, sys=4.70%, ctx=28, majf=0, minf=269 00:28:50.669 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.669 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.669 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:50.669 00:28:50.669 Run status group 0 (all jobs): 00:28:50.669 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10037-10038msec 00:28:50.669 07:06:02 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:50.669 07:06:02 -- target/dif.sh@43 -- # local sub 00:28:50.669 07:06:02 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.669 07:06:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:50.669 07:06:02 -- target/dif.sh@36 -- # local sub_id=0 
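
The two-controller JSON that fio just consumed (printed above by nvmf/common.sh@546) is assembled one subsystem at a time. A simplified sketch of that pattern, assuming the usual SPDK bdev-config envelope around the comma-joined list; the real gen_nvmf_target_json in nvmf/common.sh fills the fields from $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT rather than the literals used here:

# One heredoc-built params object per subsystem id, as in the trace.
config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join (IFS=,) and wrap in the envelope the spdk_bdev plugin reads:
(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' \
    "${config[*]}" | jq .)

The same loop scales to any subsystem count, which is how the three-controller config later in this test run is produced.
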
00:28:50.669 07:06:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:50.669 07:06:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:02 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 07:06:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 07:06:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:50.669 07:06:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:02 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 07:06:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 07:06:02 -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.669 07:06:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:50.669 07:06:02 -- target/dif.sh@36 -- # local sub_id=1 00:28:50.669 07:06:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:50.669 07:06:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:02 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 07:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 07:06:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:50.669 07:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 07:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 00:28:50.669 real 0m11.242s 00:28:50.669 user 0m20.298s 00:28:50.669 sys 0m1.282s 00:28:50.669 07:06:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.669 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 ************************************ 00:28:50.669 END TEST fio_dif_1_multi_subsystems 00:28:50.669 ************************************ 00:28:50.669 07:06:03 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:50.669 07:06:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:50.669 07:06:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.669 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 ************************************ 00:28:50.669 START TEST fio_dif_rand_params 00:28:50.669 ************************************ 00:28:50.669 07:06:03 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:28:50.669 07:06:03 -- target/dif.sh@100 -- # local NULL_DIF 00:28:50.669 07:06:03 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:50.669 07:06:03 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:50.669 07:06:03 -- target/dif.sh@103 -- # bs=128k 00:28:50.669 07:06:03 -- target/dif.sh@103 -- # numjobs=3 00:28:50.669 07:06:03 -- target/dif.sh@103 -- # iodepth=3 00:28:50.669 07:06:03 -- target/dif.sh@103 -- # runtime=5 00:28:50.669 07:06:03 -- target/dif.sh@105 -- # create_subsystems 0 00:28:50.669 07:06:03 -- target/dif.sh@28 -- # local sub 00:28:50.669 07:06:03 -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.669 07:06:03 -- target/dif.sh@31 -- # create_subsystem 0 00:28:50.669 07:06:03 -- target/dif.sh@18 -- # local sub_id=0 00:28:50.669 07:06:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:50.669 07:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 bdev_null0 00:28:50.669 07:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 07:06:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:50.669 07:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 07:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 07:06:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:50.669 07:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 07:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 07:06:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.669 07:06:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.669 07:06:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.669 [2024-05-15 07:06:03.079995] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.669 07:06:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.669 07:06:03 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:50.669 07:06:03 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:50.669 07:06:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:50.669 07:06:03 -- nvmf/common.sh@520 -- # config=() 00:28:50.669 07:06:03 -- nvmf/common.sh@520 -- # local subsystem config 00:28:50.669 07:06:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:50.669 07:06:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.669 07:06:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:50.669 { 00:28:50.669 "params": { 00:28:50.669 "name": "Nvme$subsystem", 00:28:50.669 "trtype": "$TEST_TRANSPORT", 00:28:50.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.669 "adrfam": "ipv4", 00:28:50.669 "trsvcid": "$NVMF_PORT", 00:28:50.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.669 "hdgst": ${hdgst:-false}, 00:28:50.669 "ddgst": ${ddgst:-false} 00:28:50.669 }, 00:28:50.669 "method": "bdev_nvme_attach_controller" 00:28:50.669 } 00:28:50.669 EOF 00:28:50.669 )") 00:28:50.669 07:06:03 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.669 07:06:03 -- target/dif.sh@82 -- # gen_fio_conf 00:28:50.669 07:06:03 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:50.669 07:06:03 -- target/dif.sh@54 -- # local file 00:28:50.669 07:06:03 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:50.669 07:06:03 -- target/dif.sh@56 -- # cat 00:28:50.669 07:06:03 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:50.669 07:06:03 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.669 07:06:03 -- common/autotest_common.sh@1320 -- # shift 00:28:50.669 07:06:03 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:50.669 07:06:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.669 07:06:03 -- nvmf/common.sh@542 -- # cat 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.669 07:06:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:50.669 07:06:03 
-- target/dif.sh@72 -- # (( file <= files )) 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:50.669 07:06:03 -- nvmf/common.sh@544 -- # jq . 00:28:50.669 07:06:03 -- nvmf/common.sh@545 -- # IFS=, 00:28:50.669 07:06:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:50.669 "params": { 00:28:50.669 "name": "Nvme0", 00:28:50.669 "trtype": "tcp", 00:28:50.669 "traddr": "10.0.0.2", 00:28:50.669 "adrfam": "ipv4", 00:28:50.669 "trsvcid": "4420", 00:28:50.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.669 "hdgst": false, 00:28:50.669 "ddgst": false 00:28:50.669 }, 00:28:50.669 "method": "bdev_nvme_attach_controller" 00:28:50.669 }' 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:50.669 07:06:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:50.669 07:06:03 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:50.669 07:06:03 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:50.669 07:06:03 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:50.669 07:06:03 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:50.669 07:06:03 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.669 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:50.669 ... 00:28:50.669 fio-3.35 00:28:50.669 Starting 3 threads 00:28:50.670 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.670 [2024-05-15 07:06:03.890098] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
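
Each create_subsystem call traced above boils down to four RPCs: create a null bdev with protection information, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. rpc_cmd effectively forwards these to SPDK's scripts/rpc.py, so a standalone equivalent for subsystem 0 of this subtest (DIF type 3; arguments copied from the trace, with $spdk standing for the SPDK checkout path used throughout this job) would be:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# Listener matches the traddr/trsvcid in the generated fio JSON above
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
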
00:28:50.670 [2024-05-15 07:06:03.890183] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:54.852 00:28:54.852 filename0: (groupid=0, jobs=1): err= 0: pid=642763: Wed May 15 07:06:09 2024 00:28:54.852 read: IOPS=140, BW=17.6MiB/s (18.5MB/s)(88.1MiB/5006msec) 00:28:54.852 slat (nsec): min=5035, max=42683, avg=11144.93, stdev=3693.42 00:28:54.852 clat (usec): min=7778, max=58368, avg=21277.51, stdev=16720.20 00:28:54.852 lat (usec): min=7790, max=58381, avg=21288.66, stdev=16719.92 00:28:54.852 clat percentiles (usec): 00:28:54.852 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11469], 00:28:54.852 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13566], 60.00th=[14484], 00:28:54.852 | 70.00th=[15664], 80.00th=[50594], 90.00th=[54264], 95.00th=[54789], 00:28:54.852 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:28:54.852 | 99.99th=[58459] 00:28:54.852 bw ( KiB/s): min=13824, max=21760, per=26.76%, avg=17971.20, stdev=2684.41, samples=10 00:28:54.852 iops : min= 108, max= 170, avg=140.40, stdev=20.97, samples=10 00:28:54.852 lat (msec) : 10=5.82%, 20=73.76%, 100=20.43% 00:28:54.852 cpu : usr=90.89%, sys=8.61%, ctx=7, majf=0, minf=96 00:28:54.852 IO depths : 1=8.5%, 2=91.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.852 issued rwts: total=705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.852 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:54.852 filename0: (groupid=0, jobs=1): err= 0: pid=642764: Wed May 15 07:06:09 2024 00:28:54.852 read: IOPS=210, BW=26.3MiB/s (27.5MB/s)(132MiB/5029msec) 00:28:54.852 slat (nsec): min=4700, max=65774, avg=12221.25, stdev=3417.40 00:28:54.852 clat (usec): min=6327, max=91891, avg=14252.31, stdev=12497.36 00:28:54.852 lat (usec): min=6339, max=91903, avg=14264.53, stdev=12497.16 00:28:54.852 clat percentiles (usec): 00:28:54.852 | 1.00th=[ 6390], 5.00th=[ 6980], 10.00th=[ 7439], 20.00th=[ 8455], 00:28:54.852 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11207], 00:28:54.852 | 70.00th=[12387], 80.00th=[13960], 90.00th=[16712], 95.00th=[52167], 00:28:54.852 | 99.00th=[55313], 99.50th=[55837], 99.90th=[91751], 99.95th=[91751], 00:28:54.852 | 99.99th=[91751] 00:28:54.852 bw ( KiB/s): min=17152, max=35840, per=40.19%, avg=26986.50, stdev=6425.89, samples=10 00:28:54.852 iops : min= 134, max= 280, avg=210.80, stdev=50.24, samples=10 00:28:54.852 lat (msec) : 10=43.42%, 20=47.97%, 50=1.04%, 100=7.57% 00:28:54.852 cpu : usr=89.78%, sys=9.61%, ctx=11, majf=0, minf=151 00:28:54.852 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.852 issued rwts: total=1057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.852 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:54.852 filename0: (groupid=0, jobs=1): err= 0: pid=642765: Wed May 15 07:06:09 2024 00:28:54.852 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(111MiB/5044msec) 00:28:54.852 slat (nsec): min=4475, max=32021, avg=11458.75, stdev=3164.16 00:28:54.852 clat (usec): min=6750, max=58618, avg=17051.76, stdev=14988.47 00:28:54.852 lat (usec): min=6761, max=58629, avg=17063.22, stdev=14988.38 00:28:54.852 clat percentiles (usec): 
00:28:54.852 | 1.00th=[ 6915], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 9110], 00:28:54.852 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11076], 60.00th=[12256], 00:28:54.852 | 70.00th=[13566], 80.00th=[15401], 90.00th=[52167], 95.00th=[54264], 00:28:54.852 | 99.00th=[56361], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:28:54.852 | 99.99th=[58459] 00:28:54.852 bw ( KiB/s): min=12032, max=30720, per=33.63%, avg=22579.20, stdev=5279.37, samples=10 00:28:54.852 iops : min= 94, max= 240, avg=176.40, stdev=41.25, samples=10 00:28:54.852 lat (msec) : 10=33.82%, 20=52.04%, 50=0.90%, 100=13.24% 00:28:54.852 cpu : usr=90.54%, sys=8.78%, ctx=8, majf=0, minf=89 00:28:54.852 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:54.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:54.852 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:54.852 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:54.852 00:28:54.852 Run status group 0 (all jobs): 00:28:54.852 READ: bw=65.6MiB/s (68.8MB/s), 17.6MiB/s-26.3MiB/s (18.5MB/s-27.5MB/s), io=331MiB (347MB), run=5006-5044msec 00:28:55.111 07:06:09 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:55.111 07:06:09 -- target/dif.sh@43 -- # local sub 00:28:55.111 07:06:09 -- target/dif.sh@45 -- # for sub in "$@" 00:28:55.111 07:06:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:55.111 07:06:09 -- target/dif.sh@36 -- # local sub_id=0 00:28:55.111 07:06:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.111 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.111 07:06:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.111 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.111 07:06:09 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:55.111 07:06:09 -- target/dif.sh@109 -- # bs=4k 00:28:55.111 07:06:09 -- target/dif.sh@109 -- # numjobs=8 00:28:55.111 07:06:09 -- target/dif.sh@109 -- # iodepth=16 00:28:55.111 07:06:09 -- target/dif.sh@109 -- # runtime= 00:28:55.111 07:06:09 -- target/dif.sh@109 -- # files=2 00:28:55.111 07:06:09 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:55.111 07:06:09 -- target/dif.sh@28 -- # local sub 00:28:55.111 07:06:09 -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.111 07:06:09 -- target/dif.sh@31 -- # create_subsystem 0 00:28:55.111 07:06:09 -- target/dif.sh@18 -- # local sub_id=0 00:28:55.111 07:06:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.111 bdev_null0 00:28:55.111 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.111 07:06:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.111 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:28:55.111 07:06:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.111 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.111 07:06:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.111 [2024-05-15 07:06:09.326410] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.111 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.111 07:06:09 -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.111 07:06:09 -- target/dif.sh@31 -- # create_subsystem 1 00:28:55.111 07:06:09 -- target/dif.sh@18 -- # local sub_id=1 00:28:55.111 07:06:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.111 bdev_null1 00:28:55.111 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.111 07:06:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:55.111 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.111 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.370 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.370 07:06:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:55.370 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.370 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.370 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.370 07:06:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.370 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.370 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.370 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.370 07:06:09 -- target/dif.sh@30 -- # for sub in "$@" 00:28:55.370 07:06:09 -- target/dif.sh@31 -- # create_subsystem 2 00:28:55.370 07:06:09 -- target/dif.sh@18 -- # local sub_id=2 00:28:55.370 07:06:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:55.370 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.370 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.370 bdev_null2 00:28:55.370 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.370 07:06:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:55.370 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.370 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.370 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.370 07:06:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:55.370 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.370 07:06:09 -- 
common/autotest_common.sh@10 -- # set +x 00:28:55.370 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.370 07:06:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:55.370 07:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.370 07:06:09 -- common/autotest_common.sh@10 -- # set +x 00:28:55.370 07:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.370 07:06:09 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:55.370 07:06:09 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:55.370 07:06:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:55.370 07:06:09 -- nvmf/common.sh@520 -- # config=() 00:28:55.370 07:06:09 -- nvmf/common.sh@520 -- # local subsystem config 00:28:55.370 07:06:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:55.370 07:06:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.370 07:06:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:55.370 { 00:28:55.370 "params": { 00:28:55.370 "name": "Nvme$subsystem", 00:28:55.370 "trtype": "$TEST_TRANSPORT", 00:28:55.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.370 "adrfam": "ipv4", 00:28:55.370 "trsvcid": "$NVMF_PORT", 00:28:55.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.370 "hdgst": ${hdgst:-false}, 00:28:55.370 "ddgst": ${ddgst:-false} 00:28:55.370 }, 00:28:55.370 "method": "bdev_nvme_attach_controller" 00:28:55.370 } 00:28:55.370 EOF 00:28:55.370 )") 00:28:55.370 07:06:09 -- target/dif.sh@82 -- # gen_fio_conf 00:28:55.370 07:06:09 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.370 07:06:09 -- target/dif.sh@54 -- # local file 00:28:55.370 07:06:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:55.370 07:06:09 -- target/dif.sh@56 -- # cat 00:28:55.370 07:06:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:55.370 07:06:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:55.370 07:06:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.370 07:06:09 -- common/autotest_common.sh@1320 -- # shift 00:28:55.370 07:06:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:55.370 07:06:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.370 07:06:09 -- nvmf/common.sh@542 -- # cat 00:28:55.370 07:06:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:55.370 07:06:09 -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.370 07:06:09 -- target/dif.sh@73 -- # cat 00:28:55.370 07:06:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.370 07:06:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:55.370 07:06:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:55.370 07:06:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:55.370 07:06:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:55.370 { 00:28:55.370 "params": { 00:28:55.370 "name": "Nvme$subsystem", 00:28:55.370 "trtype": "$TEST_TRANSPORT", 00:28:55.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.370 "adrfam": "ipv4", 00:28:55.370 "trsvcid": 
"$NVMF_PORT", 00:28:55.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.370 "hdgst": ${hdgst:-false}, 00:28:55.370 "ddgst": ${ddgst:-false} 00:28:55.370 }, 00:28:55.370 "method": "bdev_nvme_attach_controller" 00:28:55.370 } 00:28:55.370 EOF 00:28:55.370 )") 00:28:55.370 07:06:09 -- target/dif.sh@72 -- # (( file++ )) 00:28:55.370 07:06:09 -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.370 07:06:09 -- target/dif.sh@73 -- # cat 00:28:55.370 07:06:09 -- nvmf/common.sh@542 -- # cat 00:28:55.370 07:06:09 -- target/dif.sh@72 -- # (( file++ )) 00:28:55.370 07:06:09 -- target/dif.sh@72 -- # (( file <= files )) 00:28:55.370 07:06:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:55.370 07:06:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:55.370 { 00:28:55.370 "params": { 00:28:55.370 "name": "Nvme$subsystem", 00:28:55.370 "trtype": "$TEST_TRANSPORT", 00:28:55.370 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:55.370 "adrfam": "ipv4", 00:28:55.370 "trsvcid": "$NVMF_PORT", 00:28:55.370 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:55.370 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:55.370 "hdgst": ${hdgst:-false}, 00:28:55.370 "ddgst": ${ddgst:-false} 00:28:55.370 }, 00:28:55.370 "method": "bdev_nvme_attach_controller" 00:28:55.370 } 00:28:55.370 EOF 00:28:55.370 )") 00:28:55.370 07:06:09 -- nvmf/common.sh@542 -- # cat 00:28:55.370 07:06:09 -- nvmf/common.sh@544 -- # jq . 00:28:55.370 07:06:09 -- nvmf/common.sh@545 -- # IFS=, 00:28:55.370 07:06:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:55.370 "params": { 00:28:55.370 "name": "Nvme0", 00:28:55.370 "trtype": "tcp", 00:28:55.370 "traddr": "10.0.0.2", 00:28:55.370 "adrfam": "ipv4", 00:28:55.370 "trsvcid": "4420", 00:28:55.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.370 "hdgst": false, 00:28:55.370 "ddgst": false 00:28:55.370 }, 00:28:55.370 "method": "bdev_nvme_attach_controller" 00:28:55.370 },{ 00:28:55.370 "params": { 00:28:55.370 "name": "Nvme1", 00:28:55.370 "trtype": "tcp", 00:28:55.370 "traddr": "10.0.0.2", 00:28:55.370 "adrfam": "ipv4", 00:28:55.370 "trsvcid": "4420", 00:28:55.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:55.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:55.370 "hdgst": false, 00:28:55.370 "ddgst": false 00:28:55.370 }, 00:28:55.370 "method": "bdev_nvme_attach_controller" 00:28:55.370 },{ 00:28:55.370 "params": { 00:28:55.370 "name": "Nvme2", 00:28:55.370 "trtype": "tcp", 00:28:55.370 "traddr": "10.0.0.2", 00:28:55.371 "adrfam": "ipv4", 00:28:55.371 "trsvcid": "4420", 00:28:55.371 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:55.371 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:55.371 "hdgst": false, 00:28:55.371 "ddgst": false 00:28:55.371 }, 00:28:55.371 "method": "bdev_nvme_attach_controller" 00:28:55.371 }' 00:28:55.371 07:06:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:55.371 07:06:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:55.371 07:06:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:55.371 07:06:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:55.371 07:06:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:55.371 07:06:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:55.371 07:06:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:55.371 07:06:09 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:55.371 07:06:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:55.371 07:06:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:55.629 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:55.629 ... 00:28:55.629 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:55.629 ... 00:28:55.629 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:55.629 ... 00:28:55.629 fio-3.35 00:28:55.629 Starting 24 threads 00:28:55.629 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.565 [2024-05-15 07:06:10.457771] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:28:56.565 [2024-05-15 07:06:10.457846] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:06.579 00:29:06.579 filename0: (groupid=0, jobs=1): err= 0: pid=644159: Wed May 15 07:06:20 2024 00:29:06.579 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10006msec) 00:29:06.579 slat (nsec): min=7704, max=94880, avg=29613.02, stdev=12589.67 00:29:06.579 clat (usec): min=6487, max=67510, avg=32498.00, stdev=4017.58 00:29:06.579 lat (usec): min=6496, max=67545, avg=32527.61, stdev=4017.86 00:29:06.579 clat percentiles (usec): 00:29:06.579 | 1.00th=[20579], 5.00th=[29230], 10.00th=[30802], 20.00th=[31327], 00:29:06.580 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.580 | 70.00th=[32900], 80.00th=[33162], 90.00th=[34341], 95.00th=[36963], 00:29:06.580 | 99.00th=[47973], 99.50th=[54264], 99.90th=[67634], 99.95th=[67634], 00:29:06.580 | 99.99th=[67634] 00:29:06.580 bw ( KiB/s): min= 1648, max= 2064, per=4.19%, avg=1945.26, stdev=93.50, samples=19 00:29:06.580 iops : min= 412, max= 516, avg=486.32, stdev=23.37, samples=19 00:29:06.580 lat (msec) : 10=0.33%, 20=0.55%, 50=98.34%, 100=0.78% 00:29:06.580 cpu : usr=98.05%, sys=1.17%, ctx=34, majf=0, minf=9 00:29:06.580 IO depths : 1=1.3%, 2=6.4%, 4=21.6%, 8=58.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=93.6%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.580 filename0: (groupid=0, jobs=1): err= 0: pid=644160: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10019msec) 00:29:06.580 slat (usec): min=8, max=580, avg=38.33, stdev=21.04 00:29:06.580 clat (usec): min=14354, max=61947, avg=32416.01, stdev=3343.09 00:29:06.580 lat (usec): min=14450, max=61978, avg=32454.34, stdev=3344.40 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[19792], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:29:06.580 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.580 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:29:06.580 | 99.00th=[48497], 99.50th=[49021], 99.90th=[62129], 99.95th=[62129], 00:29:06.580 | 99.99th=[62129] 00:29:06.580 bw ( KiB/s): min= 1664, max= 2048, per=4.20%, avg=1952.00, stdev=100.66, samples=20 
00:29:06.580 iops : min= 416, max= 512, avg=488.00, stdev=25.16, samples=20 00:29:06.580 lat (msec) : 20=1.10%, 50=98.41%, 100=0.49% 00:29:06.580 cpu : usr=91.04%, sys=3.98%, ctx=248, majf=0, minf=9 00:29:06.580 IO depths : 1=5.6%, 2=11.4%, 4=24.0%, 8=52.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.580 filename0: (groupid=0, jobs=1): err= 0: pid=644161: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=485, BW=1941KiB/s (1988kB/s)(19.0MiB/10007msec) 00:29:06.580 slat (usec): min=7, max=521, avg=40.22, stdev=27.48 00:29:06.580 clat (usec): min=13890, max=60433, avg=32705.69, stdev=4606.65 00:29:06.580 lat (usec): min=13931, max=60489, avg=32745.90, stdev=4606.52 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[19006], 5.00th=[26346], 10.00th=[30540], 20.00th=[31327], 00:29:06.580 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.580 | 70.00th=[32900], 80.00th=[33424], 90.00th=[35914], 95.00th=[41157], 00:29:06.580 | 99.00th=[52691], 99.50th=[56361], 99.90th=[58459], 99.95th=[58983], 00:29:06.580 | 99.99th=[60556] 00:29:06.580 bw ( KiB/s): min= 1795, max= 2048, per=4.17%, avg=1936.55, stdev=52.28, samples=20 00:29:06.580 iops : min= 448, max= 512, avg=484.10, stdev=13.18, samples=20 00:29:06.580 lat (msec) : 20=1.44%, 50=97.45%, 100=1.11% 00:29:06.580 cpu : usr=94.72%, sys=2.56%, ctx=160, majf=0, minf=9 00:29:06.580 IO depths : 1=2.2%, 2=5.1%, 4=18.0%, 8=63.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=93.1%, 8=1.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.580 filename0: (groupid=0, jobs=1): err= 0: pid=644162: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=483, BW=1935KiB/s (1982kB/s)(18.9MiB/10013msec) 00:29:06.580 slat (usec): min=10, max=180, avg=61.83, stdev=28.13 00:29:06.580 clat (usec): min=13069, max=61673, avg=32686.20, stdev=5085.92 00:29:06.580 lat (usec): min=13094, max=61765, avg=32748.03, stdev=5086.22 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[15795], 5.00th=[25560], 10.00th=[30278], 20.00th=[31065], 00:29:06.580 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:29:06.580 | 70.00th=[32900], 80.00th=[33424], 90.00th=[36439], 95.00th=[44303], 00:29:06.580 | 99.00th=[50070], 99.50th=[53216], 99.90th=[59507], 99.95th=[60556], 00:29:06.580 | 99.99th=[61604] 00:29:06.580 bw ( KiB/s): min= 1792, max= 2016, per=4.17%, avg=1936.42, stdev=64.25, samples=19 00:29:06.580 iops : min= 448, max= 504, avg=484.11, stdev=16.06, samples=19 00:29:06.580 lat (msec) : 20=2.08%, 50=96.84%, 100=1.07% 00:29:06.580 cpu : usr=98.25%, sys=1.18%, ctx=40, majf=0, minf=9 00:29:06.580 IO depths : 1=1.3%, 2=3.3%, 4=15.1%, 8=67.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=92.6%, 8=3.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 
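
The 24-thread job whose per-thread result blocks surround this point was launched with no job file on disk at all: gen_nvmf_target_json's output reaches fio as /dev/fd/62 and gen_fio_conf's job sections as /dev/fd/61. A sketch of the same invocation using bash process substitution (the fd numbers will differ, the shape is identical), reusing the job's own helper functions:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# Leading space is the empty asan_lib slot seen in the trace
LD_PRELOAD=" $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1 2) \
    <(gen_fio_conf)
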
00:29:06.580 filename0: (groupid=0, jobs=1): err= 0: pid=644163: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10019msec) 00:29:06.580 slat (nsec): min=8689, max=97562, avg=31754.68, stdev=10784.61 00:29:06.580 clat (usec): min=22507, max=60842, avg=32356.56, stdev=2049.04 00:29:06.580 lat (usec): min=22529, max=60860, avg=32388.32, stdev=2049.31 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[29230], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:29:06.580 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.580 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:29:06.580 | 99.00th=[40633], 99.50th=[41681], 99.90th=[53740], 99.95th=[60556], 00:29:06.580 | 99.99th=[61080] 00:29:06.580 bw ( KiB/s): min= 1664, max= 2048, per=4.22%, avg=1958.40, stdev=93.78, samples=20 00:29:06.580 iops : min= 416, max= 512, avg=489.60, stdev=23.45, samples=20 00:29:06.580 lat (msec) : 50=99.67%, 100=0.33% 00:29:06.580 cpu : usr=97.08%, sys=1.83%, ctx=129, majf=0, minf=9 00:29:06.580 IO depths : 1=5.9%, 2=11.8%, 4=24.7%, 8=51.0%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.580 filename0: (groupid=0, jobs=1): err= 0: pid=644164: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10005msec) 00:29:06.580 slat (usec): min=7, max=139, avg=34.96, stdev=22.60 00:29:06.580 clat (usec): min=4711, max=65971, avg=33432.44, stdev=5350.89 00:29:06.580 lat (usec): min=4723, max=66015, avg=33467.39, stdev=5349.26 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[17695], 5.00th=[29754], 10.00th=[30802], 20.00th=[31589], 00:29:06.580 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.580 | 70.00th=[32900], 80.00th=[33817], 90.00th=[40109], 95.00th=[45876], 00:29:06.580 | 99.00th=[52691], 99.50th=[54264], 99.90th=[57934], 99.95th=[65274], 00:29:06.580 | 99.99th=[65799] 00:29:06.580 bw ( KiB/s): min= 1763, max= 2016, per=4.08%, avg=1894.05, stdev=69.11, samples=19 00:29:06.580 iops : min= 440, max= 504, avg=473.47, stdev=17.36, samples=19 00:29:06.580 lat (msec) : 10=0.34%, 20=1.13%, 50=97.12%, 100=1.41% 00:29:06.580 cpu : usr=95.79%, sys=2.53%, ctx=363, majf=0, minf=9 00:29:06.580 IO depths : 1=0.1%, 2=0.6%, 4=9.0%, 8=74.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=91.3%, 8=6.0%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.580 filename0: (groupid=0, jobs=1): err= 0: pid=644165: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10005msec) 00:29:06.580 slat (usec): min=8, max=1338, avg=47.87, stdev=33.46 00:29:06.580 clat (usec): min=9972, max=57997, avg=33353.07, stdev=5512.40 00:29:06.580 lat (usec): min=9989, max=58088, avg=33400.94, stdev=5515.53 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[17171], 5.00th=[25560], 10.00th=[30540], 20.00th=[31327], 00:29:06.580 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.580 | 
70.00th=[33162], 80.00th=[34341], 90.00th=[40633], 95.00th=[45351], 00:29:06.580 | 99.00th=[52167], 99.50th=[54264], 99.90th=[57934], 99.95th=[57934], 00:29:06.580 | 99.99th=[57934] 00:29:06.580 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1894.05, stdev=70.16, samples=19 00:29:06.580 iops : min= 448, max= 512, avg=473.47, stdev=17.60, samples=19 00:29:06.580 lat (msec) : 10=0.04%, 20=1.68%, 50=96.87%, 100=1.41% 00:29:06.580 cpu : usr=93.19%, sys=3.41%, ctx=392, majf=0, minf=9 00:29:06.580 IO depths : 1=2.0%, 2=4.6%, 4=18.3%, 8=63.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=93.1%, 8=2.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.580 filename0: (groupid=0, jobs=1): err= 0: pid=644166: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=469, BW=1877KiB/s (1922kB/s)(18.4MiB/10019msec) 00:29:06.580 slat (usec): min=7, max=594, avg=44.11, stdev=29.73 00:29:06.580 clat (usec): min=12405, max=61958, avg=33761.22, stdev=6307.26 00:29:06.580 lat (usec): min=12479, max=62038, avg=33805.33, stdev=6310.28 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[16188], 5.00th=[27132], 10.00th=[30802], 20.00th=[31589], 00:29:06.580 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.580 | 70.00th=[33162], 80.00th=[34866], 90.00th=[42206], 95.00th=[47449], 00:29:06.580 | 99.00th=[55313], 99.50th=[56886], 99.90th=[61604], 99.95th=[61604], 00:29:06.580 | 99.99th=[62129] 00:29:06.580 bw ( KiB/s): min= 1536, max= 2048, per=4.04%, avg=1874.40, stdev=125.40, samples=20 00:29:06.580 iops : min= 384, max= 512, avg=468.60, stdev=31.35, samples=20 00:29:06.580 lat (msec) : 20=2.66%, 50=94.41%, 100=2.93% 00:29:06.580 cpu : usr=94.47%, sys=2.85%, ctx=68, majf=0, minf=9 00:29:06.580 IO depths : 1=3.4%, 2=7.1%, 4=20.1%, 8=60.2%, 16=9.3%, 32=0.0%, >=64=0.0% 00:29:06.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.580 issued rwts: total=4702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.580 filename1: (groupid=0, jobs=1): err= 0: pid=644167: Wed May 15 07:06:20 2024 00:29:06.580 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10005msec) 00:29:06.580 slat (usec): min=5, max=340, avg=55.88, stdev=26.15 00:29:06.580 clat (usec): min=7599, max=58459, avg=33430.18, stdev=5351.30 00:29:06.580 lat (usec): min=7641, max=58532, avg=33486.06, stdev=5350.63 00:29:06.580 clat percentiles (usec): 00:29:06.580 | 1.00th=[18744], 5.00th=[28443], 10.00th=[30540], 20.00th=[31327], 00:29:06.580 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.580 | 70.00th=[33162], 80.00th=[33817], 90.00th=[40109], 95.00th=[45351], 00:29:06.580 | 99.00th=[52691], 99.50th=[55313], 99.90th=[58459], 99.95th=[58459], 00:29:06.580 | 99.99th=[58459] 00:29:06.580 bw ( KiB/s): min= 1760, max= 2016, per=4.09%, avg=1898.95, stdev=65.28, samples=19 00:29:06.580 iops : min= 440, max= 504, avg=474.74, stdev=16.32, samples=19 00:29:06.580 lat (msec) : 10=0.04%, 20=1.37%, 50=96.51%, 100=2.08% 00:29:06.580 cpu : usr=90.58%, sys=3.91%, ctx=135, majf=0, minf=9 00:29:06.581 IO depths : 1=0.2%, 2=0.8%, 4=8.3%, 8=75.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=91.0%, 8=6.2%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.581 filename1: (groupid=0, jobs=1): err= 0: pid=644168: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10012msec) 00:29:06.581 slat (nsec): min=7805, max=88218, avg=25778.27, stdev=14073.07 00:29:06.581 clat (usec): min=2966, max=74428, avg=32120.73, stdev=4844.52 00:29:06.581 lat (usec): min=2975, max=74481, avg=32146.51, stdev=4845.52 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[14353], 5.00th=[24511], 10.00th=[30540], 20.00th=[31327], 00:29:06.581 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.581 | 70.00th=[32637], 80.00th=[33162], 90.00th=[33817], 95.00th=[35914], 00:29:06.581 | 99.00th=[47973], 99.50th=[56361], 99.90th=[73925], 99.95th=[73925], 00:29:06.581 | 99.99th=[73925] 00:29:06.581 bw ( KiB/s): min= 1664, max= 2112, per=4.26%, avg=1976.42, stdev=100.54, samples=19 00:29:06.581 iops : min= 416, max= 528, avg=494.11, stdev=25.13, samples=19 00:29:06.581 lat (msec) : 4=0.08%, 10=0.08%, 20=2.08%, 50=96.81%, 100=0.95% 00:29:06.581 cpu : usr=98.64%, sys=0.96%, ctx=20, majf=0, minf=9 00:29:06.581 IO depths : 1=4.0%, 2=8.7%, 4=22.3%, 8=56.1%, 16=9.0%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.581 filename1: (groupid=0, jobs=1): err= 0: pid=644169: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10019msec) 00:29:06.581 slat (usec): min=7, max=136, avg=40.19, stdev=23.84 00:29:06.581 clat (usec): min=8862, max=60983, avg=32068.93, stdev=3723.89 00:29:06.581 lat (usec): min=8886, max=61001, avg=32109.11, stdev=3722.58 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[17957], 5.00th=[27919], 10.00th=[30540], 20.00th=[31065], 00:29:06.581 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.581 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35914], 00:29:06.581 | 99.00th=[44303], 99.50th=[49021], 99.90th=[53740], 99.95th=[60556], 00:29:06.581 | 99.99th=[61080] 00:29:06.581 bw ( KiB/s): min= 1792, max= 2144, per=4.25%, avg=1974.80, stdev=77.57, samples=20 00:29:06.581 iops : min= 448, max= 536, avg=493.70, stdev=19.39, samples=20 00:29:06.581 lat (msec) : 10=0.08%, 20=1.80%, 50=97.80%, 100=0.32% 00:29:06.581 cpu : usr=98.42%, sys=1.10%, ctx=21, majf=0, minf=9 00:29:06.581 IO depths : 1=3.1%, 2=6.2%, 4=19.7%, 8=61.1%, 16=9.9%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=93.6%, 8=1.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.581 filename1: (groupid=0, jobs=1): err= 0: pid=644170: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=498, BW=1996KiB/s (2044kB/s)(19.5MiB/10005msec) 00:29:06.581 slat (usec): min=7, max=126, avg=42.30, stdev=23.56 00:29:06.581 clat (usec): min=7984, max=58208, 
avg=31727.77, stdev=3967.35 00:29:06.581 lat (usec): min=7994, max=58249, avg=31770.07, stdev=3972.14 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[15139], 5.00th=[23987], 10.00th=[30278], 20.00th=[31065], 00:29:06.581 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.581 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:29:06.581 | 99.00th=[43779], 99.50th=[48497], 99.90th=[57934], 99.95th=[57934], 00:29:06.581 | 99.99th=[58459] 00:29:06.581 bw ( KiB/s): min= 1920, max= 2224, per=4.29%, avg=1994.11, stdev=100.52, samples=19 00:29:06.581 iops : min= 480, max= 556, avg=498.53, stdev=25.13, samples=19 00:29:06.581 lat (msec) : 10=0.26%, 20=2.22%, 50=97.10%, 100=0.42% 00:29:06.581 cpu : usr=97.96%, sys=1.47%, ctx=69, majf=0, minf=9 00:29:06.581 IO depths : 1=4.3%, 2=9.1%, 4=21.8%, 8=56.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.581 filename1: (groupid=0, jobs=1): err= 0: pid=644171: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=457, BW=1832KiB/s (1875kB/s)(17.9MiB/10020msec) 00:29:06.581 slat (usec): min=13, max=1174, avg=66.09, stdev=39.84 00:29:06.581 clat (usec): min=10051, max=63150, avg=34539.83, stdev=6398.09 00:29:06.581 lat (usec): min=10141, max=63261, avg=34605.92, stdev=6398.13 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[18482], 5.00th=[27132], 10.00th=[30540], 20.00th=[31589], 00:29:06.581 | 30.00th=[31851], 40.00th=[32375], 50.00th=[32637], 60.00th=[33162], 00:29:06.581 | 70.00th=[34341], 80.00th=[39060], 90.00th=[44303], 95.00th=[47973], 00:29:06.581 | 99.00th=[52691], 99.50th=[53740], 99.90th=[62129], 99.95th=[63177], 00:29:06.581 | 99.99th=[63177] 00:29:06.581 bw ( KiB/s): min= 1632, max= 2016, per=3.94%, avg=1828.80, stdev=127.03, samples=20 00:29:06.581 iops : min= 408, max= 504, avg=457.20, stdev=31.76, samples=20 00:29:06.581 lat (msec) : 20=1.77%, 50=95.44%, 100=2.79% 00:29:06.581 cpu : usr=90.44%, sys=4.02%, ctx=247, majf=0, minf=9 00:29:06.581 IO depths : 1=1.9%, 2=4.0%, 4=14.6%, 8=67.3%, 16=12.2%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=92.2%, 8=3.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.581 filename1: (groupid=0, jobs=1): err= 0: pid=644172: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10005msec) 00:29:06.581 slat (usec): min=7, max=2085, avg=57.40, stdev=86.43 00:29:06.581 clat (usec): min=4900, max=60725, avg=33080.85, stdev=5497.58 00:29:06.581 lat (usec): min=4909, max=60734, avg=33138.25, stdev=5496.52 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[17957], 5.00th=[25297], 10.00th=[30278], 20.00th=[31327], 00:29:06.581 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.581 | 70.00th=[32900], 80.00th=[34341], 90.00th=[39060], 95.00th=[44827], 00:29:06.581 | 99.00th=[51119], 99.50th=[53740], 99.90th=[60031], 99.95th=[60556], 00:29:06.581 | 99.99th=[60556] 00:29:06.581 bw ( KiB/s): min= 1760, max= 2016, per=4.11%, avg=1910.74, stdev=76.93, 
samples=19 00:29:06.581 iops : min= 440, max= 504, avg=477.68, stdev=19.23, samples=19 00:29:06.581 lat (msec) : 10=0.33%, 20=1.60%, 50=96.62%, 100=1.44% 00:29:06.581 cpu : usr=88.64%, sys=4.45%, ctx=255, majf=0, minf=9 00:29:06.581 IO depths : 1=0.2%, 2=0.6%, 4=8.6%, 8=75.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=90.5%, 8=6.4%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.581 filename1: (groupid=0, jobs=1): err= 0: pid=644173: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=489, BW=1959KiB/s (2007kB/s)(19.2MiB/10019msec) 00:29:06.581 slat (usec): min=7, max=336, avg=29.71, stdev=17.90 00:29:06.581 clat (usec): min=11391, max=60795, avg=32443.02, stdev=3526.09 00:29:06.581 lat (usec): min=11415, max=60999, avg=32472.73, stdev=3528.01 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[20317], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:29:06.581 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:29:06.581 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[36439], 00:29:06.581 | 99.00th=[49021], 99.50th=[54264], 99.90th=[56886], 99.95th=[60556], 00:29:06.581 | 99.99th=[60556] 00:29:06.581 bw ( KiB/s): min= 1632, max= 2048, per=4.21%, avg=1956.80, stdev=91.41, samples=20 00:29:06.581 iops : min= 408, max= 512, avg=489.20, stdev=22.85, samples=20 00:29:06.581 lat (msec) : 20=0.94%, 50=98.41%, 100=0.65% 00:29:06.581 cpu : usr=97.04%, sys=1.64%, ctx=44, majf=0, minf=9 00:29:06.581 IO depths : 1=2.9%, 2=6.2%, 4=17.4%, 8=62.5%, 16=11.0%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=92.9%, 8=2.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.581 filename1: (groupid=0, jobs=1): err= 0: pid=644174: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=477, BW=1911KiB/s (1957kB/s)(18.7MiB/10014msec) 00:29:06.581 slat (nsec): min=5257, max=96014, avg=23729.03, stdev=11676.36 00:29:06.581 clat (usec): min=12750, max=60340, avg=33321.42, stdev=4955.90 00:29:06.581 lat (usec): min=12776, max=60371, avg=33345.15, stdev=4956.06 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[19530], 5.00th=[27919], 10.00th=[30802], 20.00th=[31589], 00:29:06.581 | 30.00th=[31851], 40.00th=[32375], 50.00th=[32375], 60.00th=[32637], 00:29:06.581 | 70.00th=[32900], 80.00th=[33817], 90.00th=[37487], 95.00th=[43779], 00:29:06.581 | 99.00th=[54264], 99.50th=[56886], 99.90th=[57410], 99.95th=[60556], 00:29:06.581 | 99.99th=[60556] 00:29:06.581 bw ( KiB/s): min= 1763, max= 2048, per=4.11%, avg=1907.11, stdev=71.57, samples=19 00:29:06.581 iops : min= 440, max= 512, avg=476.74, stdev=17.98, samples=19 00:29:06.581 lat (msec) : 20=1.11%, 50=97.41%, 100=1.48% 00:29:06.581 cpu : usr=97.96%, sys=1.54%, ctx=20, majf=0, minf=9 00:29:06.581 IO depths : 1=1.2%, 2=2.9%, 4=13.5%, 8=69.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:29:06.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 complete : 0=0.0%, 4=91.8%, 8=3.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.581 issued rwts: total=4785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.581 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:29:06.581 filename2: (groupid=0, jobs=1): err= 0: pid=644175: Wed May 15 07:06:20 2024 00:29:06.581 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10008msec) 00:29:06.581 slat (usec): min=4, max=206, avg=27.94, stdev=11.81 00:29:06.581 clat (usec): min=4479, max=55165, avg=31863.29, stdev=3667.94 00:29:06.581 lat (usec): min=4483, max=55201, avg=31891.23, stdev=3670.62 00:29:06.581 clat percentiles (usec): 00:29:06.581 | 1.00th=[16450], 5.00th=[24773], 10.00th=[30540], 20.00th=[31327], 00:29:06.581 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.581 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35390], 00:29:06.581 | 99.00th=[41681], 99.50th=[44827], 99.90th=[53216], 99.95th=[53216], 00:29:06.581 | 99.99th=[55313] 00:29:06.581 bw ( KiB/s): min= 1920, max= 2176, per=4.29%, avg=1994.00, stdev=83.90, samples=20 00:29:06.582 iops : min= 480, max= 544, avg=498.50, stdev=20.97, samples=20 00:29:06.582 lat (msec) : 10=0.32%, 20=1.54%, 50=97.86%, 100=0.28% 00:29:06.582 cpu : usr=91.70%, sys=3.77%, ctx=121, majf=0, minf=9 00:29:06.582 IO depths : 1=4.6%, 2=9.6%, 4=22.7%, 8=55.2%, 16=8.1%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 filename2: (groupid=0, jobs=1): err= 0: pid=644176: Wed May 15 07:06:20 2024 00:29:06.582 read: IOPS=468, BW=1872KiB/s (1917kB/s)(18.3MiB/10004msec) 00:29:06.582 slat (usec): min=11, max=1460, avg=70.15, stdev=37.83 00:29:06.582 clat (usec): min=8017, max=66230, avg=33837.37, stdev=6159.66 00:29:06.582 lat (usec): min=8100, max=66323, avg=33907.52, stdev=6159.78 00:29:06.582 clat percentiles (usec): 00:29:06.582 | 1.00th=[18220], 5.00th=[28443], 10.00th=[30540], 20.00th=[31065], 00:29:06.582 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32900], 00:29:06.582 | 70.00th=[33424], 80.00th=[35390], 90.00th=[42206], 95.00th=[46400], 00:29:06.582 | 99.00th=[53740], 99.50th=[58459], 99.90th=[62653], 99.95th=[63177], 00:29:06.582 | 99.99th=[66323] 00:29:06.582 bw ( KiB/s): min= 1672, max= 1992, per=4.03%, avg=1870.47, stdev=86.94, samples=19 00:29:06.582 iops : min= 418, max= 498, avg=467.58, stdev=21.77, samples=19 00:29:06.582 lat (msec) : 10=0.09%, 20=2.16%, 50=95.11%, 100=2.65% 00:29:06.582 cpu : usr=88.21%, sys=4.97%, ctx=424, majf=0, minf=9 00:29:06.582 IO depths : 1=0.2%, 2=1.3%, 4=9.7%, 8=73.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=91.2%, 8=6.0%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 filename2: (groupid=0, jobs=1): err= 0: pid=644177: Wed May 15 07:06:20 2024 00:29:06.582 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10003msec) 00:29:06.582 slat (nsec): min=7966, max=91516, avg=30462.68, stdev=13364.70 00:29:06.582 clat (usec): min=9346, max=53486, avg=31940.01, stdev=3301.58 00:29:06.582 lat (usec): min=9356, max=53516, avg=31970.47, stdev=3304.31 00:29:06.582 clat percentiles (usec): 00:29:06.582 | 1.00th=[15926], 5.00th=[28967], 10.00th=[30540], 20.00th=[31327], 00:29:06.582 | 30.00th=[31589], 40.00th=[31851], 
50.00th=[32113], 60.00th=[32375], 00:29:06.582 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:29:06.582 | 99.00th=[42206], 99.50th=[45351], 99.90th=[53216], 99.95th=[53216], 00:29:06.582 | 99.99th=[53740] 00:29:06.582 bw ( KiB/s): min= 1792, max= 2192, per=4.27%, avg=1984.84, stdev=89.48, samples=19 00:29:06.582 iops : min= 448, max= 548, avg=496.21, stdev=22.37, samples=19 00:29:06.582 lat (msec) : 10=0.24%, 20=1.59%, 50=98.01%, 100=0.16% 00:29:06.582 cpu : usr=98.44%, sys=1.12%, ctx=16, majf=0, minf=9 00:29:06.582 IO depths : 1=4.5%, 2=10.2%, 4=23.2%, 8=54.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 filename2: (groupid=0, jobs=1): err= 0: pid=644178: Wed May 15 07:06:20 2024 00:29:06.582 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10015msec) 00:29:06.582 slat (usec): min=6, max=1181, avg=48.33, stdev=38.56 00:29:06.582 clat (usec): min=14152, max=59195, avg=32364.07, stdev=3194.11 00:29:06.582 lat (usec): min=14163, max=59264, avg=32412.41, stdev=3197.11 00:29:06.582 clat percentiles (usec): 00:29:06.582 | 1.00th=[22676], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:29:06.582 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.582 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[35914], 00:29:06.582 | 99.00th=[47973], 99.50th=[50594], 99.90th=[57934], 99.95th=[58983], 00:29:06.582 | 99.99th=[58983] 00:29:06.582 bw ( KiB/s): min= 1792, max= 2048, per=4.20%, avg=1952.00, stdev=73.32, samples=19 00:29:06.582 iops : min= 448, max= 512, avg=488.00, stdev=18.33, samples=19 00:29:06.582 lat (msec) : 20=0.65%, 50=98.65%, 100=0.70% 00:29:06.582 cpu : usr=92.31%, sys=3.22%, ctx=200, majf=0, minf=9 00:29:06.582 IO depths : 1=5.3%, 2=11.1%, 4=23.5%, 8=52.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 filename2: (groupid=0, jobs=1): err= 0: pid=644179: Wed May 15 07:06:20 2024 00:29:06.582 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 00:29:06.582 slat (usec): min=8, max=1292, avg=37.05, stdev=24.36 00:29:06.582 clat (usec): min=12056, max=56715, avg=32167.22, stdev=2001.98 00:29:06.582 lat (usec): min=12074, max=56797, avg=32204.26, stdev=2001.94 00:29:06.582 clat percentiles (usec): 00:29:06.582 | 1.00th=[29492], 5.00th=[30540], 10.00th=[30802], 20.00th=[31327], 00:29:06.582 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.582 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:29:06.582 | 99.00th=[36963], 99.50th=[40109], 99.90th=[53216], 99.95th=[56361], 00:29:06.582 | 99.99th=[56886] 00:29:06.582 bw ( KiB/s): min= 1920, max= 2048, per=4.24%, avg=1967.16, stdev=63.44, samples=19 00:29:06.582 iops : min= 480, max= 512, avg=491.79, stdev=15.86, samples=19 00:29:06.582 lat (msec) : 20=0.45%, 50=99.43%, 100=0.12% 00:29:06.582 cpu : usr=94.98%, sys=2.36%, ctx=73, majf=0, minf=9 00:29:06.582 IO depths : 1=5.9%, 2=12.1%, 4=24.8%, 8=50.6%, 
16=6.6%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 filename2: (groupid=0, jobs=1): err= 0: pid=644180: Wed May 15 07:06:20 2024 00:29:06.582 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.7MiB/10003msec) 00:29:06.582 slat (nsec): min=7882, max=99214, avg=28056.30, stdev=14677.06 00:29:06.582 clat (usec): min=7559, max=56885, avg=33241.02, stdev=4446.69 00:29:06.582 lat (usec): min=7569, max=56903, avg=33269.08, stdev=4445.63 00:29:06.582 clat percentiles (usec): 00:29:06.582 | 1.00th=[21365], 5.00th=[30016], 10.00th=[30802], 20.00th=[31589], 00:29:06.582 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.582 | 70.00th=[32900], 80.00th=[33817], 90.00th=[38011], 95.00th=[42730], 00:29:06.582 | 99.00th=[49546], 99.50th=[53216], 99.90th=[56361], 99.95th=[56886], 00:29:06.582 | 99.99th=[56886] 00:29:06.582 bw ( KiB/s): min= 1536, max= 2032, per=4.11%, avg=1907.79, stdev=109.38, samples=19 00:29:06.582 iops : min= 384, max= 508, avg=476.95, stdev=27.34, samples=19 00:29:06.582 lat (msec) : 10=0.04%, 20=0.56%, 50=98.48%, 100=0.92% 00:29:06.582 cpu : usr=98.12%, sys=1.50%, ctx=17, majf=0, minf=9 00:29:06.582 IO depths : 1=0.6%, 2=3.2%, 4=15.5%, 8=67.1%, 16=13.5%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=92.5%, 8=3.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 filename2: (groupid=0, jobs=1): err= 0: pid=644181: Wed May 15 07:06:20 2024 00:29:06.582 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10019msec) 00:29:06.582 slat (usec): min=8, max=104, avg=31.46, stdev=11.80 00:29:06.582 clat (usec): min=22034, max=61392, avg=32365.79, stdev=2140.83 00:29:06.582 lat (usec): min=22047, max=61415, avg=32397.25, stdev=2139.81 00:29:06.582 clat percentiles (usec): 00:29:06.582 | 1.00th=[29492], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:29:06.582 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:29:06.582 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[34341], 00:29:06.582 | 99.00th=[37487], 99.50th=[39584], 99.90th=[61080], 99.95th=[61604], 00:29:06.582 | 99.99th=[61604] 00:29:06.582 bw ( KiB/s): min= 1792, max= 2048, per=4.22%, avg=1958.40, stdev=73.12, samples=20 00:29:06.582 iops : min= 448, max= 512, avg=489.60, stdev=18.28, samples=20 00:29:06.582 lat (msec) : 50=99.67%, 100=0.33% 00:29:06.582 cpu : usr=96.92%, sys=1.65%, ctx=69, majf=0, minf=9 00:29:06.582 IO depths : 1=5.9%, 2=12.0%, 4=24.8%, 8=50.6%, 16=6.6%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 filename2: (groupid=0, jobs=1): err= 0: pid=644182: Wed May 15 07:06:20 2024 00:29:06.582 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10007msec) 00:29:06.582 slat (nsec): min=7904, max=99404, avg=28369.14, stdev=14992.23 00:29:06.582 
clat (usec): min=5613, max=63607, avg=33274.16, stdev=5297.62 00:29:06.582 lat (usec): min=5622, max=63624, avg=33302.53, stdev=5298.01 00:29:06.582 clat percentiles (usec): 00:29:06.582 | 1.00th=[15270], 5.00th=[28705], 10.00th=[30802], 20.00th=[31589], 00:29:06.582 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:29:06.582 | 70.00th=[32900], 80.00th=[33817], 90.00th=[38536], 95.00th=[43254], 00:29:06.582 | 99.00th=[53740], 99.50th=[56886], 99.90th=[57934], 99.95th=[63701], 00:29:06.582 | 99.99th=[63701] 00:29:06.582 bw ( KiB/s): min= 1792, max= 2016, per=4.09%, avg=1898.53, stdev=62.88, samples=19 00:29:06.582 iops : min= 448, max= 504, avg=474.63, stdev=15.72, samples=19 00:29:06.582 lat (msec) : 10=0.56%, 20=0.98%, 50=96.89%, 100=1.57% 00:29:06.582 cpu : usr=97.59%, sys=1.48%, ctx=43, majf=0, minf=9 00:29:06.582 IO depths : 1=0.3%, 2=1.0%, 4=13.3%, 8=71.3%, 16=14.1%, 32=0.0%, >=64=0.0% 00:29:06.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 complete : 0=0.0%, 4=92.2%, 8=4.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:06.582 issued rwts: total=4787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:06.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:06.582 00:29:06.582 Run status group 0 (all jobs): 00:29:06.582 READ: bw=45.3MiB/s (47.5MB/s), 1832KiB/s-1996KiB/s (1875kB/s-2044kB/s), io=454MiB (476MB), run=10003-10020msec 00:29:06.840 07:06:20 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:06.840 07:06:20 -- target/dif.sh@43 -- # local sub 00:29:06.840 07:06:20 -- target/dif.sh@45 -- # for sub in "$@" 00:29:06.840 07:06:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:06.840 07:06:20 -- target/dif.sh@36 -- # local sub_id=0 00:29:06.840 07:06:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:06.840 07:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:20 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:06.840 07:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:20 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:20 -- target/dif.sh@45 -- # for sub in "$@" 00:29:06.840 07:06:20 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:06.840 07:06:20 -- target/dif.sh@36 -- # local sub_id=1 00:29:06.840 07:06:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:06.840 07:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:20 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:06.840 07:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:20 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:20 -- target/dif.sh@45 -- # for sub in "$@" 00:29:06.840 07:06:20 -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:06.840 07:06:20 -- target/dif.sh@36 -- # local sub_id=2 00:29:06.840 07:06:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:06.840 07:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 
07:06:20 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:06.840 07:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:20 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@115 -- # NULL_DIF=1 00:29:06.840 07:06:21 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:06.840 07:06:21 -- target/dif.sh@115 -- # numjobs=2 00:29:06.840 07:06:21 -- target/dif.sh@115 -- # iodepth=8 00:29:06.840 07:06:21 -- target/dif.sh@115 -- # runtime=5 00:29:06.840 07:06:21 -- target/dif.sh@115 -- # files=1 00:29:06.840 07:06:21 -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:06.840 07:06:21 -- target/dif.sh@28 -- # local sub 00:29:06.840 07:06:21 -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.840 07:06:21 -- target/dif.sh@31 -- # create_subsystem 0 00:29:06.840 07:06:21 -- target/dif.sh@18 -- # local sub_id=0 00:29:06.840 07:06:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 bdev_null0 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 [2024-05-15 07:06:21.036270] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@30 -- # for sub in "$@" 00:29:06.840 07:06:21 -- target/dif.sh@31 -- # create_subsystem 1 00:29:06.840 07:06:21 -- target/dif.sh@18 -- # local sub_id=1 00:29:06.840 07:06:21 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 bdev_null1 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.840 07:06:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.840 07:06:21 -- common/autotest_common.sh@10 -- # set +x 00:29:06.840 07:06:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.840 07:06:21 -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:06.840 07:06:21 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:06.840 07:06:21 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:06.840 07:06:21 -- nvmf/common.sh@520 -- # config=() 00:29:06.840 07:06:21 -- nvmf/common.sh@520 -- # local subsystem config 00:29:06.840 07:06:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:06.840 07:06:21 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.840 07:06:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:06.840 { 00:29:06.840 "params": { 00:29:06.840 "name": "Nvme$subsystem", 00:29:06.840 "trtype": "$TEST_TRANSPORT", 00:29:06.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:06.840 "adrfam": "ipv4", 00:29:06.840 "trsvcid": "$NVMF_PORT", 00:29:06.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:06.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:06.840 "hdgst": ${hdgst:-false}, 00:29:06.840 "ddgst": ${ddgst:-false} 00:29:06.840 }, 00:29:06.840 "method": "bdev_nvme_attach_controller" 00:29:06.840 } 00:29:06.840 EOF 00:29:06.840 )") 00:29:06.840 07:06:21 -- target/dif.sh@82 -- # gen_fio_conf 00:29:06.840 07:06:21 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:06.840 07:06:21 -- target/dif.sh@54 -- # local file 00:29:06.840 07:06:21 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:06.840 07:06:21 -- target/dif.sh@56 -- # cat 00:29:07.097 07:06:21 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:07.097 07:06:21 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:07.097 07:06:21 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.097 07:06:21 -- common/autotest_common.sh@1320 -- # shift 00:29:07.097 07:06:21 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:07.097 07:06:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.097 07:06:21 -- nvmf/common.sh@542 -- # cat 00:29:07.097 07:06:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.097 07:06:21 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:07.097 07:06:21 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:07.097 07:06:21 -- target/dif.sh@72 -- # (( file <= files )) 00:29:07.097 07:06:21 -- target/dif.sh@73 -- # cat 00:29:07.097 07:06:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:07.097 07:06:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:07.097 07:06:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:07.097 { 00:29:07.097 "params": { 
00:29:07.097 "name": "Nvme$subsystem", 00:29:07.097 "trtype": "$TEST_TRANSPORT", 00:29:07.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:07.097 "adrfam": "ipv4", 00:29:07.097 "trsvcid": "$NVMF_PORT", 00:29:07.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:07.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:07.097 "hdgst": ${hdgst:-false}, 00:29:07.098 "ddgst": ${ddgst:-false} 00:29:07.098 }, 00:29:07.098 "method": "bdev_nvme_attach_controller" 00:29:07.098 } 00:29:07.098 EOF 00:29:07.098 )") 00:29:07.098 07:06:21 -- target/dif.sh@72 -- # (( file++ )) 00:29:07.098 07:06:21 -- nvmf/common.sh@542 -- # cat 00:29:07.098 07:06:21 -- target/dif.sh@72 -- # (( file <= files )) 00:29:07.098 07:06:21 -- nvmf/common.sh@544 -- # jq . 00:29:07.098 07:06:21 -- nvmf/common.sh@545 -- # IFS=, 00:29:07.098 07:06:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:07.098 "params": { 00:29:07.098 "name": "Nvme0", 00:29:07.098 "trtype": "tcp", 00:29:07.098 "traddr": "10.0.0.2", 00:29:07.098 "adrfam": "ipv4", 00:29:07.098 "trsvcid": "4420", 00:29:07.098 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:07.098 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:07.098 "hdgst": false, 00:29:07.098 "ddgst": false 00:29:07.098 }, 00:29:07.098 "method": "bdev_nvme_attach_controller" 00:29:07.098 },{ 00:29:07.098 "params": { 00:29:07.098 "name": "Nvme1", 00:29:07.098 "trtype": "tcp", 00:29:07.098 "traddr": "10.0.0.2", 00:29:07.098 "adrfam": "ipv4", 00:29:07.098 "trsvcid": "4420", 00:29:07.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:07.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:07.098 "hdgst": false, 00:29:07.098 "ddgst": false 00:29:07.098 }, 00:29:07.098 "method": "bdev_nvme_attach_controller" 00:29:07.098 }' 00:29:07.098 07:06:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:07.098 07:06:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:07.098 07:06:21 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:07.098 07:06:21 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:07.098 07:06:21 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:07.098 07:06:21 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:07.098 07:06:21 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:07.098 07:06:21 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:07.098 07:06:21 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:07.098 07:06:21 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:07.098 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:07.098 ... 00:29:07.098 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:07.098 ... 00:29:07.098 fio-3.35 00:29:07.098 Starting 4 threads 00:29:07.355 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.919 [2024-05-15 07:06:22.014389] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:29:07.919 [2024-05-15 07:06:22.014473] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:13.179 00:29:13.179 filename0: (groupid=0, jobs=1): err= 0: pid=645608: Wed May 15 07:06:27 2024 00:29:13.179 read: IOPS=2059, BW=16.1MiB/s (16.9MB/s)(80.5MiB/5002msec) 00:29:13.179 slat (nsec): min=7083, max=55392, avg=10962.22, stdev=4622.10 00:29:13.179 clat (usec): min=1643, max=46997, avg=3851.24, stdev=2133.37 00:29:13.179 lat (usec): min=1657, max=47011, avg=3862.20, stdev=2133.35 00:29:13.179 clat percentiles (usec): 00:29:13.179 | 1.00th=[ 2278], 5.00th=[ 2671], 10.00th=[ 2868], 20.00th=[ 3130], 00:29:13.179 | 30.00th=[ 3326], 40.00th=[ 3523], 50.00th=[ 3720], 60.00th=[ 3884], 00:29:13.179 | 70.00th=[ 4047], 80.00th=[ 4293], 90.00th=[ 4752], 95.00th=[ 5145], 00:29:13.179 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[45351], 99.95th=[46924], 00:29:13.179 | 99.99th=[46924] 00:29:13.179 bw ( KiB/s): min=14592, max=17584, per=26.69%, avg=16440.56, stdev=906.03, samples=9 00:29:13.179 iops : min= 1824, max= 2198, avg=2055.00, stdev=113.18, samples=9 00:29:13.179 lat (msec) : 2=0.23%, 4=67.27%, 10=32.25%, 20=0.02%, 50=0.23% 00:29:13.179 cpu : usr=94.64%, sys=4.86%, ctx=7, majf=0, minf=53 00:29:13.179 IO depths : 1=0.2%, 2=2.7%, 4=67.2%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.179 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.179 issued rwts: total=10302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.179 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:13.179 filename0: (groupid=0, jobs=1): err= 0: pid=645609: Wed May 15 07:06:27 2024 00:29:13.179 read: IOPS=1634, BW=12.8MiB/s (13.4MB/s)(63.9MiB/5003msec) 00:29:13.179 slat (nsec): min=7328, max=48150, avg=12495.28, stdev=6117.10 00:29:13.179 clat (usec): min=1482, max=47648, avg=4852.37, stdev=4722.12 00:29:13.179 lat (usec): min=1489, max=47671, avg=4864.87, stdev=4722.01 00:29:13.179 clat percentiles (usec): 00:29:13.179 | 1.00th=[ 2442], 5.00th=[ 2966], 10.00th=[ 3261], 20.00th=[ 3589], 00:29:13.179 | 30.00th=[ 3884], 40.00th=[ 4047], 50.00th=[ 4228], 60.00th=[ 4490], 00:29:13.179 | 70.00th=[ 4752], 80.00th=[ 5145], 90.00th=[ 5669], 95.00th=[ 6063], 00:29:13.179 | 99.00th=[44827], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:29:13.179 | 99.99th=[47449] 00:29:13.179 bw ( KiB/s): min=11120, max=14528, per=20.93%, avg=12896.67, stdev=1230.84, samples=9 00:29:13.179 iops : min= 1390, max= 1816, avg=1612.00, stdev=153.87, samples=9 00:29:13.179 lat (msec) : 2=0.24%, 4=37.86%, 10=60.63%, 50=1.27% 00:29:13.179 cpu : usr=95.24%, sys=4.12%, ctx=25, majf=0, minf=35 00:29:13.179 IO depths : 1=0.6%, 2=4.1%, 4=68.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.179 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.179 issued rwts: total=8178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.179 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:13.179 filename1: (groupid=0, jobs=1): err= 0: pid=645610: Wed May 15 07:06:27 2024 00:29:13.179 read: IOPS=1808, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5003msec) 00:29:13.179 slat (nsec): min=7074, max=55996, avg=10462.23, stdev=4316.21 00:29:13.179 clat (usec): min=1803, max=46820, avg=4389.23, stdev=3556.85 00:29:13.179 lat (usec): min=1815, max=46834, avg=4399.69, stdev=3556.87 00:29:13.179 clat 
percentiles (usec): 00:29:13.179 | 1.00th=[ 2409], 5.00th=[ 2835], 10.00th=[ 3097], 20.00th=[ 3359], 00:29:13.179 | 30.00th=[ 3621], 40.00th=[ 3851], 50.00th=[ 4015], 60.00th=[ 4228], 00:29:13.179 | 70.00th=[ 4490], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5735], 00:29:13.179 | 99.00th=[ 6980], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:29:13.179 | 99.99th=[46924] 00:29:13.179 bw ( KiB/s): min=11751, max=16608, per=23.40%, avg=14415.00, stdev=1664.80, samples=9 00:29:13.179 iops : min= 1468, max= 2076, avg=1801.78, stdev=208.27, samples=9 00:29:13.179 lat (msec) : 2=0.04%, 4=49.39%, 10=49.86%, 50=0.71% 00:29:13.179 cpu : usr=95.18%, sys=4.30%, ctx=7, majf=0, minf=56 00:29:13.179 IO depths : 1=0.5%, 2=3.2%, 4=69.5%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.179 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.179 issued rwts: total=9047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.179 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:13.179 filename1: (groupid=0, jobs=1): err= 0: pid=645611: Wed May 15 07:06:27 2024 00:29:13.179 read: IOPS=2198, BW=17.2MiB/s (18.0MB/s)(85.9MiB/5002msec) 00:29:13.179 slat (nsec): min=7016, max=47306, avg=10910.75, stdev=4480.93 00:29:13.179 clat (usec): min=1185, max=10184, avg=3605.70, stdev=788.75 00:29:13.179 lat (usec): min=1199, max=10209, avg=3616.61, stdev=788.91 00:29:13.179 clat percentiles (usec): 00:29:13.179 | 1.00th=[ 2024], 5.00th=[ 2442], 10.00th=[ 2671], 20.00th=[ 2966], 00:29:13.179 | 30.00th=[ 3163], 40.00th=[ 3359], 50.00th=[ 3556], 60.00th=[ 3752], 00:29:13.179 | 70.00th=[ 3949], 80.00th=[ 4146], 90.00th=[ 4621], 95.00th=[ 5014], 00:29:13.179 | 99.00th=[ 5800], 99.50th=[ 6325], 99.90th=[ 7111], 99.95th=[ 8160], 00:29:13.180 | 99.99th=[10159] 00:29:13.180 bw ( KiB/s): min=16464, max=19264, per=28.66%, avg=17656.56, stdev=1000.56, samples=9 00:29:13.180 iops : min= 2058, max= 2408, avg=2207.00, stdev=125.07, samples=9 00:29:13.180 lat (msec) : 2=0.95%, 4=72.12%, 10=26.92%, 20=0.02% 00:29:13.180 cpu : usr=94.28%, sys=5.20%, ctx=7, majf=0, minf=40 00:29:13.180 IO depths : 1=0.3%, 2=3.4%, 4=66.7%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:13.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.180 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.180 issued rwts: total=10997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.180 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:13.180 00:29:13.180 Run status group 0 (all jobs): 00:29:13.180 READ: bw=60.2MiB/s (63.1MB/s), 12.8MiB/s-17.2MiB/s (13.4MB/s-18.0MB/s), io=301MiB (316MB), run=5002-5003msec 00:29:13.438 07:06:27 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:13.438 07:06:27 -- target/dif.sh@43 -- # local sub 00:29:13.438 07:06:27 -- target/dif.sh@45 -- # for sub in "$@" 00:29:13.438 07:06:27 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:13.438 07:06:27 -- target/dif.sh@36 -- # local sub_id=0 00:29:13.438 07:06:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:13.438 07:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.438 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.438 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.438 07:06:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:13.438 07:06:27 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:29:13.438 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.438 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.438 07:06:27 -- target/dif.sh@45 -- # for sub in "$@" 00:29:13.438 07:06:27 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:13.438 07:06:27 -- target/dif.sh@36 -- # local sub_id=1 00:29:13.438 07:06:27 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.438 07:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.438 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.438 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.438 07:06:27 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:13.438 07:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.438 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.438 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.438 00:29:13.438 real 0m24.410s 00:29:13.438 user 4m25.619s 00:29:13.438 sys 0m9.114s 00:29:13.438 07:06:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.438 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.438 ************************************ 00:29:13.438 END TEST fio_dif_rand_params 00:29:13.438 ************************************ 00:29:13.438 07:06:27 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:13.438 07:06:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:13.438 07:06:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:13.438 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.438 ************************************ 00:29:13.438 START TEST fio_dif_digest 00:29:13.438 ************************************ 00:29:13.438 07:06:27 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:29:13.438 07:06:27 -- target/dif.sh@123 -- # local NULL_DIF 00:29:13.438 07:06:27 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:13.438 07:06:27 -- target/dif.sh@125 -- # local hdgst ddgst 00:29:13.438 07:06:27 -- target/dif.sh@127 -- # NULL_DIF=3 00:29:13.438 07:06:27 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:13.438 07:06:27 -- target/dif.sh@127 -- # numjobs=3 00:29:13.438 07:06:27 -- target/dif.sh@127 -- # iodepth=3 00:29:13.438 07:06:27 -- target/dif.sh@127 -- # runtime=10 00:29:13.438 07:06:27 -- target/dif.sh@128 -- # hdgst=true 00:29:13.438 07:06:27 -- target/dif.sh@128 -- # ddgst=true 00:29:13.438 07:06:27 -- target/dif.sh@130 -- # create_subsystems 0 00:29:13.438 07:06:27 -- target/dif.sh@28 -- # local sub 00:29:13.439 07:06:27 -- target/dif.sh@30 -- # for sub in "$@" 00:29:13.439 07:06:27 -- target/dif.sh@31 -- # create_subsystem 0 00:29:13.439 07:06:27 -- target/dif.sh@18 -- # local sub_id=0 00:29:13.439 07:06:27 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:13.439 07:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.439 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.439 bdev_null0 00:29:13.439 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.439 07:06:27 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:13.439 07:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.439 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.439 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.439 07:06:27 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:13.439 07:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.439 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.439 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.439 07:06:27 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:13.439 07:06:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.439 07:06:27 -- common/autotest_common.sh@10 -- # set +x 00:29:13.439 [2024-05-15 07:06:27.512613] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:13.439 07:06:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.439 07:06:27 -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:13.439 07:06:27 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:13.439 07:06:27 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:13.439 07:06:27 -- nvmf/common.sh@520 -- # config=() 00:29:13.439 07:06:27 -- nvmf/common.sh@520 -- # local subsystem config 00:29:13.439 07:06:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.439 07:06:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:13.439 07:06:27 -- target/dif.sh@82 -- # gen_fio_conf 00:29:13.439 07:06:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:13.439 { 00:29:13.439 "params": { 00:29:13.439 "name": "Nvme$subsystem", 00:29:13.439 "trtype": "$TEST_TRANSPORT", 00:29:13.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:13.439 "adrfam": "ipv4", 00:29:13.439 "trsvcid": "$NVMF_PORT", 00:29:13.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:13.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:13.439 "hdgst": ${hdgst:-false}, 00:29:13.439 "ddgst": ${ddgst:-false} 00:29:13.439 }, 00:29:13.439 "method": "bdev_nvme_attach_controller" 00:29:13.439 } 00:29:13.439 EOF 00:29:13.439 )") 00:29:13.439 07:06:27 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.439 07:06:27 -- target/dif.sh@54 -- # local file 00:29:13.439 07:06:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:13.439 07:06:27 -- target/dif.sh@56 -- # cat 00:29:13.439 07:06:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:13.439 07:06:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:13.439 07:06:27 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:13.439 07:06:27 -- common/autotest_common.sh@1320 -- # shift 00:29:13.439 07:06:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:13.439 07:06:27 -- nvmf/common.sh@542 -- # cat 00:29:13.439 07:06:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.439 07:06:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:13.439 07:06:27 -- target/dif.sh@72 -- # (( file <= files )) 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:13.439 07:06:27 -- nvmf/common.sh@544 -- # jq . 
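The trace above assembles one JSON fragment per subsystem from a config+=() heredoc template; the jq, IFS and printf steps below then merge those fragments into the final bdev_nvme_attach_controller configuration handed to fio. A minimal standalone reduction of that heredoc pattern, with fields trimmed and names illustrative:

    # toy reduction of the config+=() heredoc pattern traced above
    config=()
    for subsystem in 0 1; do
    config+=("$(cat <<EOF
    { "params": { "name": "Nvme$subsystem", "trtype": "tcp" }, "method": "bdev_nvme_attach_controller" }
    EOF
    )")
    done
    printf '%s\n' "${config[@]}" | jq .   # pretty-print and validate each fragment
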
00:29:13.439 07:06:27 -- nvmf/common.sh@545 -- # IFS=, 00:29:13.439 07:06:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:13.439 "params": { 00:29:13.439 "name": "Nvme0", 00:29:13.439 "trtype": "tcp", 00:29:13.439 "traddr": "10.0.0.2", 00:29:13.439 "adrfam": "ipv4", 00:29:13.439 "trsvcid": "4420", 00:29:13.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:13.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:13.439 "hdgst": true, 00:29:13.439 "ddgst": true 00:29:13.439 }, 00:29:13.439 "method": "bdev_nvme_attach_controller" 00:29:13.439 }' 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:13.439 07:06:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:13.439 07:06:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:13.439 07:06:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:29:13.439 07:06:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:29:13.439 07:06:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:13.439 07:06:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:13.698 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:13.698 ... 00:29:13.698 fio-3.35 00:29:13.698 Starting 3 threads 00:29:13.698 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.264 [2024-05-15 07:06:28.279507] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
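The JSON printed above differs from the earlier runs in one detail: hdgst and ddgst are true, so every NVMe/TCP PDU in this digest test carries CRC32C header and data digests. For comparison, the Linux kernel initiator can request the same digests when connecting to the listener created above; this is an illustrative sketch only, not part of this run:

    # illustrative kernel-initiator analogue of the hdgst/ddgst=true attach
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hdr-digest --data-digest
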
00:29:14.264 [2024-05-15 07:06:28.279565] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:29:24.227 00:29:24.227 filename0: (groupid=0, jobs=1): err= 0: pid=646389: Wed May 15 07:06:38 2024 00:29:24.227 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(230MiB/10048msec) 00:29:24.227 slat (nsec): min=4277, max=37277, avg=12686.29, stdev=1821.86 00:29:24.227 clat (usec): min=7028, max=60743, avg=16316.73, stdev=6857.55 00:29:24.227 lat (usec): min=7042, max=60757, avg=16329.41, stdev=6857.59 00:29:24.227 clat percentiles (usec): 00:29:24.227 | 1.00th=[ 8029], 5.00th=[10552], 10.00th=[11863], 20.00th=[13435], 00:29:24.227 | 30.00th=[14615], 40.00th=[15401], 50.00th=[15926], 60.00th=[16319], 00:29:24.227 | 70.00th=[16712], 80.00th=[17171], 90.00th=[18220], 95.00th=[19006], 00:29:24.227 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[60556], 00:29:24.227 | 99.99th=[60556] 00:29:24.227 bw ( KiB/s): min=19456, max=27392, per=32.53%, avg=23552.00, stdev=2214.69, samples=20 00:29:24.227 iops : min= 152, max= 214, avg=184.00, stdev=17.30, samples=20 00:29:24.227 lat (msec) : 10=3.53%, 20=93.60%, 50=0.38%, 100=2.50% 00:29:24.227 cpu : usr=92.00%, sys=6.99%, ctx=29, majf=0, minf=119 00:29:24.227 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.227 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:24.227 filename0: (groupid=0, jobs=1): err= 0: pid=646390: Wed May 15 07:06:38 2024 00:29:24.227 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(245MiB/10047msec) 00:29:24.227 slat (nsec): min=4796, max=67395, avg=12563.47, stdev=2191.02 00:29:24.227 clat (usec): min=6753, max=59373, avg=15316.40, stdev=7248.25 00:29:24.227 lat (usec): min=6764, max=59384, avg=15328.97, stdev=7248.31 00:29:24.227 clat percentiles (usec): 00:29:24.227 | 1.00th=[ 7373], 5.00th=[10159], 10.00th=[11469], 20.00th=[12780], 00:29:24.227 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14484], 60.00th=[15008], 00:29:24.227 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16581], 95.00th=[17433], 00:29:24.227 | 99.00th=[56886], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:29:24.227 | 99.99th=[59507] 00:29:24.227 bw ( KiB/s): min=18688, max=29440, per=34.65%, avg=25088.00, stdev=2987.76, samples=20 00:29:24.227 iops : min= 146, max= 230, avg=196.00, stdev=23.34, samples=20 00:29:24.227 lat (msec) : 10=4.69%, 20=92.46%, 100=2.85% 00:29:24.227 cpu : usr=92.63%, sys=6.61%, ctx=22, majf=0, minf=221 00:29:24.227 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.228 issued rwts: total=1963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.228 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:24.228 filename0: (groupid=0, jobs=1): err= 0: pid=646391: Wed May 15 07:06:38 2024 00:29:24.228 read: IOPS=186, BW=23.4MiB/s (24.5MB/s)(235MiB/10045msec) 00:29:24.228 slat (nsec): min=4349, max=34956, avg=12822.87, stdev=2137.12 00:29:24.228 clat (usec): min=7178, max=95593, avg=16006.06, stdev=10164.17 00:29:24.228 lat (usec): min=7190, max=95601, avg=16018.88, stdev=10164.12 00:29:24.228 clat percentiles (usec): 
00:29:24.228 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[11600], 20.00th=[12518], 00:29:24.228 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:29:24.228 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15664], 95.00th=[53216], 00:29:24.228 | 99.00th=[55313], 99.50th=[55837], 99.90th=[94897], 99.95th=[95945], 00:29:24.228 | 99.99th=[95945] 00:29:24.228 bw ( KiB/s): min=19200, max=28472, per=33.17%, avg=24015.60, stdev=2100.19, samples=20 00:29:24.228 iops : min= 150, max= 222, avg=187.60, stdev=16.36, samples=20 00:29:24.228 lat (msec) : 10=1.97%, 20=91.80%, 50=0.05%, 100=6.18% 00:29:24.228 cpu : usr=92.29%, sys=7.22%, ctx=19, majf=0, minf=94 00:29:24.228 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.228 issued rwts: total=1878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.228 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:24.228 00:29:24.228 Run status group 0 (all jobs): 00:29:24.228 READ: bw=70.7MiB/s (74.1MB/s), 22.9MiB/s-24.4MiB/s (24.0MB/s-25.6MB/s), io=711MiB (745MB), run=10045-10048msec 00:29:24.792 07:06:38 -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:24.793 07:06:38 -- target/dif.sh@43 -- # local sub 00:29:24.793 07:06:38 -- target/dif.sh@45 -- # for sub in "$@" 00:29:24.793 07:06:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:24.793 07:06:38 -- target/dif.sh@36 -- # local sub_id=0 00:29:24.793 07:06:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:24.793 07:06:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.793 07:06:38 -- common/autotest_common.sh@10 -- # set +x 00:29:24.793 07:06:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.793 07:06:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:24.793 07:06:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.793 07:06:38 -- common/autotest_common.sh@10 -- # set +x 00:29:24.793 07:06:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.793 00:29:24.793 real 0m11.271s 00:29:24.793 user 0m29.035s 00:29:24.793 sys 0m2.374s 00:29:24.793 07:06:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.793 07:06:38 -- common/autotest_common.sh@10 -- # set +x 00:29:24.793 ************************************ 00:29:24.793 END TEST fio_dif_digest 00:29:24.793 ************************************ 00:29:24.793 07:06:38 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:24.793 07:06:38 -- target/dif.sh@147 -- # nvmftestfini 00:29:24.793 07:06:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:24.793 07:06:38 -- nvmf/common.sh@116 -- # sync 00:29:24.793 07:06:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:24.793 07:06:38 -- nvmf/common.sh@119 -- # set +e 00:29:24.793 07:06:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:24.793 07:06:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:24.793 rmmod nvme_tcp 00:29:24.793 rmmod nvme_fabrics 00:29:24.793 rmmod nvme_keyring 00:29:24.793 07:06:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:24.793 07:06:38 -- nvmf/common.sh@123 -- # set -e 00:29:24.793 07:06:38 -- nvmf/common.sh@124 -- # return 0 00:29:24.793 07:06:38 -- nvmf/common.sh@477 -- # '[' -n 639408 ']' 00:29:24.793 07:06:38 -- nvmf/common.sh@478 -- # killprocess 639408 00:29:24.793 07:06:38 -- common/autotest_common.sh@926 -- # 
'[' -z 639408 ']' 00:29:24.793 07:06:38 -- common/autotest_common.sh@930 -- # kill -0 639408 00:29:24.793 07:06:38 -- common/autotest_common.sh@931 -- # uname 00:29:24.793 07:06:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:24.793 07:06:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 639408 00:29:24.793 07:06:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:24.793 07:06:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:24.793 07:06:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 639408' 00:29:24.793 killing process with pid 639408 00:29:24.793 07:06:38 -- common/autotest_common.sh@945 -- # kill 639408 00:29:24.793 07:06:38 -- common/autotest_common.sh@950 -- # wait 639408 00:29:25.051 07:06:39 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:25.051 07:06:39 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:26.426 Waiting for block devices as requested 00:29:26.426 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:26.426 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:26.426 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:26.684 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:26.684 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:26.684 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:26.684 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:26.684 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:26.943 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:26.943 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:26.943 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:26.943 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:27.201 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:27.201 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:27.201 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:27.460 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:27.460 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:27.460 07:06:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:27.460 07:06:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:27.460 07:06:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.460 07:06:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:27.460 07:06:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.460 07:06:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:27.460 07:06:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.996 07:06:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:29.996 00:29:29.996 real 1m8.177s 00:29:29.996 user 6m22.026s 00:29:29.996 sys 0m21.879s 00:29:29.996 07:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.996 07:06:43 -- common/autotest_common.sh@10 -- # set +x 00:29:29.996 ************************************ 00:29:29.996 END TEST nvmf_dif 00:29:29.996 ************************************ 00:29:29.996 07:06:43 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:29.996 07:06:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:29.996 07:06:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:29.996 07:06:43 -- common/autotest_common.sh@10 -- # set +x 00:29:29.996 ************************************ 00:29:29.996 START TEST nvmf_abort_qd_sizes 00:29:29.996 ************************************ 
00:29:29.996 07:06:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:29.996 * Looking for test storage... 00:29:29.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.996 07:06:43 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.996 07:06:43 -- nvmf/common.sh@7 -- # uname -s 00:29:29.996 07:06:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.996 07:06:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.996 07:06:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.996 07:06:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.996 07:06:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.996 07:06:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.996 07:06:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.996 07:06:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.996 07:06:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.996 07:06:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.996 07:06:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.996 07:06:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:29.996 07:06:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.996 07:06:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.996 07:06:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.996 07:06:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.996 07:06:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.996 07:06:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.996 07:06:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.996 07:06:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.996 07:06:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.996 07:06:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.996 07:06:43 -- paths/export.sh@5 -- # export PATH 00:29:29.996 07:06:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.996 07:06:43 -- nvmf/common.sh@46 -- # : 0 00:29:29.996 07:06:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:29.996 07:06:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:29.996 07:06:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:29.996 07:06:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.996 07:06:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.996 07:06:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:29.996 07:06:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:29.996 07:06:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:29.996 07:06:43 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:29:29.996 07:06:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:29.996 07:06:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.996 07:06:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:29.996 07:06:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:29.996 07:06:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:29.996 07:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.996 07:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:29.996 07:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.996 07:06:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:29.996 07:06:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:29.996 07:06:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:29.996 07:06:43 -- common/autotest_common.sh@10 -- # set +x 00:29:31.907 07:06:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:31.907 07:06:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:31.907 07:06:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:31.907 07:06:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:31.907 07:06:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:31.907 07:06:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:31.907 07:06:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:31.907 07:06:46 -- nvmf/common.sh@294 -- # net_devs=() 00:29:31.907 07:06:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:31.907 07:06:46 -- nvmf/common.sh@295 -- # e810=() 00:29:31.907 07:06:46 -- nvmf/common.sh@295 -- # local -ga e810 00:29:31.907 07:06:46 -- nvmf/common.sh@296 -- # x722=() 00:29:31.907 07:06:46 -- nvmf/common.sh@296 -- # local -ga x722 00:29:31.907 07:06:46 -- nvmf/common.sh@297 -- # mlx=() 00:29:31.907 07:06:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:31.907 07:06:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.907 07:06:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:31.907 07:06:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:31.907 07:06:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:31.907 07:06:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:31.907 07:06:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:31.907 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:31.907 07:06:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:31.907 07:06:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:31.907 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:31.907 07:06:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:31.907 07:06:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:31.907 07:06:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.907 07:06:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:31.907 07:06:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.907 07:06:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:31.907 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:31.907 07:06:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.907 07:06:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:31.907 07:06:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.907 07:06:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:31.907 07:06:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.907 07:06:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:31.907 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:31.907 07:06:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.907 07:06:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:31.907 07:06:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:31.907 07:06:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:31.907 07:06:46 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:31.907 07:06:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:31.907 07:06:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.907 07:06:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.907 07:06:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.907 07:06:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:31.907 07:06:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.908 07:06:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.908 07:06:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:31.908 07:06:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.908 07:06:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.908 07:06:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:31.908 07:06:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:31.908 07:06:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.908 07:06:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.166 07:06:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.166 07:06:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.166 07:06:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:32.166 07:06:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.166 07:06:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.166 07:06:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.166 07:06:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:32.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:29:32.166 00:29:32.166 --- 10.0.0.2 ping statistics --- 00:29:32.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.166 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:29:32.166 07:06:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:29:32.166 00:29:32.166 --- 10.0.0.1 ping statistics --- 00:29:32.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.166 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:29:32.166 07:06:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.166 07:06:46 -- nvmf/common.sh@410 -- # return 0 00:29:32.166 07:06:46 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:29:32.166 07:06:46 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:33.542 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:33.542 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:33.542 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:33.542 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:33.542 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:33.542 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:33.542 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:33.542 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:33.542 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:33.542 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:33.543 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:33.543 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:33.543 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:33.543 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:33.543 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:33.543 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:34.477 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:29:34.735 07:06:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.735 07:06:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:34.735 07:06:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:34.735 07:06:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.735 07:06:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:34.735 07:06:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:34.736 07:06:48 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:29:34.736 07:06:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:34.736 07:06:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:34.736 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:29:34.736 07:06:48 -- nvmf/common.sh@469 -- # nvmfpid=651900 00:29:34.736 07:06:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:34.736 07:06:48 -- nvmf/common.sh@470 -- # waitforlisten 651900 00:29:34.736 07:06:48 -- common/autotest_common.sh@819 -- # '[' -z 651900 ']' 00:29:34.736 07:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.736 07:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:34.736 07:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.736 07:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:34.736 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:29:34.736 [2024-05-15 07:06:48.809650] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:29:34.736 [2024-05-15 07:06:48.809728] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.736 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.736 [2024-05-15 07:06:48.890346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.993 [2024-05-15 07:06:49.006699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:34.993 [2024-05-15 07:06:49.006868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.993 [2024-05-15 07:06:49.006887] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.993 [2024-05-15 07:06:49.006902] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.993 [2024-05-15 07:06:49.006994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.993 [2024-05-15 07:06:49.007052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.993 [2024-05-15 07:06:49.007096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.993 [2024-05-15 07:06:49.007098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.558 07:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:35.558 07:06:49 -- common/autotest_common.sh@852 -- # return 0 00:29:35.558 07:06:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:35.558 07:06:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:35.558 07:06:49 -- common/autotest_common.sh@10 -- # set +x 00:29:35.558 07:06:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:29:35.558 07:06:49 -- scripts/common.sh@311 -- # local bdf bdfs 00:29:35.558 07:06:49 -- scripts/common.sh@312 -- # local nvmes 00:29:35.558 07:06:49 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:29:35.558 07:06:49 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:35.558 07:06:49 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:29:35.558 07:06:49 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:29:35.558 07:06:49 -- scripts/common.sh@322 -- # uname -s 00:29:35.558 07:06:49 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:29:35.558 07:06:49 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:29:35.558 07:06:49 -- scripts/common.sh@327 -- # (( 1 )) 00:29:35.558 07:06:49 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:29:35.558 07:06:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:35.558 07:06:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:35.558 07:06:49 -- common/autotest_common.sh@10 -- # set +x 00:29:35.558 ************************************ 00:29:35.558 START TEST 
spdk_target_abort 00:29:35.558 ************************************ 00:29:35.558 07:06:49 -- common/autotest_common.sh@1104 -- # spdk_target 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:29:35.558 07:06:49 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:29:35.558 07:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.558 07:06:49 -- common/autotest_common.sh@10 -- # set +x 00:29:38.835 spdk_targetn1 00:29:38.835 07:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.835 07:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.835 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:29:38.835 [2024-05-15 07:06:52.611044] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.835 07:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:29:38.835 07:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.835 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:29:38.835 07:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:29:38.835 07:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.835 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:29:38.835 07:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:29:38.835 07:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.835 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:29:38.835 [2024-05-15 07:06:52.643310] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.835 07:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:38.835 07:06:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:38.835 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.120 Initializing NVMe Controllers 00:29:42.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:42.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:42.120 Initialization complete. Launching workers. 00:29:42.120 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8725, failed: 0 00:29:42.120 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1350, failed to submit 7375 00:29:42.120 success 805, unsuccess 545, failed 0 00:29:42.120 07:06:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:42.120 07:06:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:42.120 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.400 Initializing NVMe Controllers 00:29:45.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:45.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:45.400 Initialization complete. Launching workers. 00:29:45.400 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8788, failed: 0 00:29:45.400 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1231, failed to submit 7557 00:29:45.400 success 318, unsuccess 913, failed 0 00:29:45.400 07:06:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:45.400 07:06:59 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:45.400 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.677 Initializing NVMe Controllers 00:29:48.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:48.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:48.677 Initialization complete. Launching workers. 
00:29:48.677 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32129, failed: 0 00:29:48.677 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2687, failed to submit 29442 00:29:48.677 success 558, unsuccess 2129, failed 0 00:29:48.677 07:07:02 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:29:48.677 07:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.677 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:29:48.677 07:07:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.677 07:07:02 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:48.677 07:07:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.677 07:07:02 -- common/autotest_common.sh@10 -- # set +x 00:29:49.607 07:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:49.607 07:07:03 -- target/abort_qd_sizes.sh@62 -- # killprocess 651900 00:29:49.607 07:07:03 -- common/autotest_common.sh@926 -- # '[' -z 651900 ']' 00:29:49.607 07:07:03 -- common/autotest_common.sh@930 -- # kill -0 651900 00:29:49.607 07:07:03 -- common/autotest_common.sh@931 -- # uname 00:29:49.607 07:07:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:49.607 07:07:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 651900 00:29:49.607 07:07:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:49.607 07:07:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:49.607 07:07:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 651900' 00:29:49.607 killing process with pid 651900 00:29:49.607 07:07:03 -- common/autotest_common.sh@945 -- # kill 651900 00:29:49.607 07:07:03 -- common/autotest_common.sh@950 -- # wait 651900 00:29:49.865 00:29:49.865 real 0m14.190s 00:29:49.865 user 0m55.628s 00:29:49.865 sys 0m2.784s 00:29:49.865 07:07:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.865 07:07:03 -- common/autotest_common.sh@10 -- # set +x 00:29:49.865 ************************************ 00:29:49.865 END TEST spdk_target_abort 00:29:49.865 ************************************ 00:29:49.865 07:07:03 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:29:49.865 07:07:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:49.865 07:07:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:49.865 07:07:03 -- common/autotest_common.sh@10 -- # set +x 00:29:49.865 ************************************ 00:29:49.865 START TEST kernel_target_abort 00:29:49.865 ************************************ 00:29:49.865 07:07:03 -- common/autotest_common.sh@1104 -- # kernel_target 00:29:49.865 07:07:03 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:29:49.865 07:07:03 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:29:49.865 07:07:03 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:29:49.865 07:07:03 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:29:49.865 07:07:03 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:29:49.865 07:07:03 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:49.865 07:07:03 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:49.865 07:07:03 -- nvmf/common.sh@627 -- # local block nvme 00:29:49.865 07:07:03 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:29:49.865 07:07:03 -- nvmf/common.sh@630 -- # modprobe nvmet 00:29:49.865 07:07:04 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:49.865 07:07:04 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:51.239 Waiting for block devices as requested 00:29:51.239 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:51.239 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:51.239 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:51.239 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:51.504 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:51.504 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:51.504 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:51.504 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:51.819 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:51.819 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:51.819 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:51.819 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:52.079 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:52.079 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:52.079 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:52.079 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:52.338 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:52.338 07:07:06 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:29:52.338 07:07:06 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:52.338 07:07:06 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:29:52.338 07:07:06 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:29:52.339 07:07:06 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:52.339 No valid GPT data, bailing 00:29:52.339 07:07:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:52.339 07:07:06 -- scripts/common.sh@393 -- # pt= 00:29:52.339 07:07:06 -- scripts/common.sh@394 -- # return 1 00:29:52.339 07:07:06 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:29:52.339 07:07:06 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:29:52.339 07:07:06 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:52.339 07:07:06 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:52.339 07:07:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:52.339 07:07:06 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:29:52.339 07:07:06 -- nvmf/common.sh@654 -- # echo 1 00:29:52.339 07:07:06 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:29:52.339 07:07:06 -- nvmf/common.sh@656 -- # echo 1 00:29:52.339 07:07:06 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:29:52.339 07:07:06 -- nvmf/common.sh@663 -- # echo tcp 00:29:52.339 07:07:06 -- nvmf/common.sh@664 -- # echo 4420 00:29:52.339 07:07:06 -- nvmf/common.sh@665 -- # echo ipv4 00:29:52.339 07:07:06 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:52.339 07:07:06 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:52.339 00:29:52.339 Discovery Log Number of Records 2, Generation counter 2 00:29:52.339 =====Discovery Log Entry 0====== 00:29:52.339 trtype: tcp 00:29:52.339 adrfam: ipv4 00:29:52.339 
subtype: current discovery subsystem 00:29:52.339 treq: not specified, sq flow control disable supported 00:29:52.339 portid: 1 00:29:52.339 trsvcid: 4420 00:29:52.339 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:52.339 traddr: 10.0.0.1 00:29:52.339 eflags: none 00:29:52.339 sectype: none 00:29:52.339 =====Discovery Log Entry 1====== 00:29:52.339 trtype: tcp 00:29:52.339 adrfam: ipv4 00:29:52.339 subtype: nvme subsystem 00:29:52.339 treq: not specified, sq flow control disable supported 00:29:52.339 portid: 1 00:29:52.339 trsvcid: 4420 00:29:52.339 subnqn: kernel_target 00:29:52.339 traddr: 10.0.0.1 00:29:52.339 eflags: none 00:29:52.339 sectype: none 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:52.339 07:07:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:52.339 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.621 Initializing NVMe Controllers 00:29:55.621 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:55.621 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:55.621 Initialization complete. Launching workers. 
00:29:55.621 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 25124, failed: 0 00:29:55.621 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 25124, failed to submit 0 00:29:55.621 success 0, unsuccess 25124, failed 0 00:29:55.621 07:07:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:55.621 07:07:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:55.621 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.901 Initializing NVMe Controllers 00:29:58.901 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:58.901 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:58.901 Initialization complete. Launching workers. 00:29:58.901 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 52394, failed: 0 00:29:58.901 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 13190, failed to submit 39204 00:29:58.901 success 0, unsuccess 13190, failed 0 00:29:58.901 07:07:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:58.901 07:07:12 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:58.901 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.181 Initializing NVMe Controllers 00:30:02.181 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:30:02.181 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:30:02.181 Initialization complete. Launching workers. 
00:30:02.181 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 51658, failed: 0 00:30:02.181 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 12870, failed to submit 38788 00:30:02.181 success 0, unsuccess 12870, failed 0 00:30:02.181 07:07:15 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:30:02.181 07:07:15 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:30:02.181 07:07:15 -- nvmf/common.sh@677 -- # echo 0 00:30:02.181 07:07:15 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:30:02.181 07:07:15 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:30:02.181 07:07:15 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:02.181 07:07:15 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:30:02.181 07:07:15 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:30:02.181 07:07:15 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:30:02.181 00:30:02.181 real 0m11.777s 00:30:02.181 user 0m3.778s 00:30:02.181 sys 0m2.531s 00:30:02.181 07:07:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.181 07:07:15 -- common/autotest_common.sh@10 -- # set +x 00:30:02.181 ************************************ 00:30:02.181 END TEST kernel_target_abort 00:30:02.181 ************************************ 00:30:02.181 07:07:15 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:30:02.181 07:07:15 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:30:02.181 07:07:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:02.181 07:07:15 -- nvmf/common.sh@116 -- # sync 00:30:02.181 07:07:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:02.181 07:07:15 -- nvmf/common.sh@119 -- # set +e 00:30:02.181 07:07:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:02.181 07:07:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:02.181 rmmod nvme_tcp 00:30:02.181 rmmod nvme_fabrics 00:30:02.181 rmmod nvme_keyring 00:30:02.181 07:07:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:02.181 07:07:15 -- nvmf/common.sh@123 -- # set -e 00:30:02.181 07:07:15 -- nvmf/common.sh@124 -- # return 0 00:30:02.181 07:07:15 -- nvmf/common.sh@477 -- # '[' -n 651900 ']' 00:30:02.181 07:07:15 -- nvmf/common.sh@478 -- # killprocess 651900 00:30:02.181 07:07:15 -- common/autotest_common.sh@926 -- # '[' -z 651900 ']' 00:30:02.181 07:07:15 -- common/autotest_common.sh@930 -- # kill -0 651900 00:30:02.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (651900) - No such process 00:30:02.181 07:07:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 651900 is not found' 00:30:02.181 Process with pid 651900 is not found 00:30:02.181 07:07:15 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:30:02.181 07:07:15 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:03.116 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:30:03.116 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:30:03.116 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:30:03.116 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:30:03.116 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:30:03.116 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:30:03.116 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 
00:30:03.116 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:30:03.116 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:30:03.116 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:30:03.116 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:30:03.116 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:30:03.116 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:30:03.116 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:30:03.116 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:30:03.116 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:30:03.374 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:30:03.375 07:07:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:03.375 07:07:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:03.375 07:07:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:03.375 07:07:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:03.375 07:07:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.375 07:07:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:03.375 07:07:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.277 07:07:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:05.277 00:30:05.277 real 0m35.807s 00:30:05.277 user 1m1.926s 00:30:05.277 sys 0m9.150s 00:30:05.277 07:07:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.277 07:07:19 -- common/autotest_common.sh@10 -- # set +x 00:30:05.277 ************************************ 00:30:05.277 END TEST nvmf_abort_qd_sizes 00:30:05.277 ************************************ 00:30:05.277 07:07:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:30:05.277 07:07:19 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:05.277 07:07:19 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:05.277 07:07:19 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:30:05.277 07:07:19 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:30:05.278 07:07:19 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:30:05.278 07:07:19 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:30:05.278 07:07:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:05.278 07:07:19 -- common/autotest_common.sh@10 -- # set +x 00:30:05.278 07:07:19 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:30:05.278 07:07:19 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:30:05.278 07:07:19 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:30:05.278 07:07:19 -- common/autotest_common.sh@10 -- # set +x 00:30:07.179 INFO: APP EXITING 00:30:07.179 INFO: killing all VMs 00:30:07.179 INFO: killing vhost app 00:30:07.179 INFO: EXIT DONE 00:30:08.553 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:30:08.553 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:30:08.553 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:30:08.553 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:30:08.553 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:30:08.553 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:30:08.553 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:30:08.553 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:30:08.553 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:30:08.553 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:30:08.553 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:30:08.553 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:30:08.553 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:30:08.553 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:30:08.553 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:30:08.553 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:30:08.553 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:30:09.933 Cleaning 00:30:09.933 Removing: /var/run/dpdk/spdk0/config 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:09.933 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:09.933 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:09.933 Removing: /var/run/dpdk/spdk1/config 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:09.933 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:09.933 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:09.933 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:09.933 Removing: /var/run/dpdk/spdk2/config 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:09.933 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:09.933 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:09.933 Removing: /var/run/dpdk/spdk3/config 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:09.933 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:09.933 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:09.933 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:09.933 Removing: /var/run/dpdk/spdk4/config 00:30:09.933 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:09.933 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:09.933 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:09.933 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:09.933 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:09.933 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:09.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:09.934 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:09.934 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:09.934 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:09.934 Removing: /dev/shm/bdev_svc_trace.1 00:30:09.934 Removing: /dev/shm/nvmf_trace.0 00:30:09.934 Removing: /dev/shm/spdk_tgt_trace.pid371808 00:30:09.934 Removing: /var/run/dpdk/spdk0 00:30:09.934 Removing: /var/run/dpdk/spdk1 00:30:09.934 Removing: /var/run/dpdk/spdk2 00:30:09.934 Removing: /var/run/dpdk/spdk3 00:30:09.934 Removing: /var/run/dpdk/spdk4 00:30:09.934 Removing: /var/run/dpdk/spdk_pid370118 00:30:09.934 Removing: /var/run/dpdk/spdk_pid370872 00:30:09.934 Removing: /var/run/dpdk/spdk_pid371808 00:30:09.934 Removing: /var/run/dpdk/spdk_pid372288 00:30:09.934 Removing: /var/run/dpdk/spdk_pid373518 00:30:09.934 Removing: /var/run/dpdk/spdk_pid374450 00:30:09.934 Removing: /var/run/dpdk/spdk_pid374755 00:30:09.934 Removing: /var/run/dpdk/spdk_pid374966 00:30:09.934 Removing: /var/run/dpdk/spdk_pid375300 00:30:09.934 Removing: /var/run/dpdk/spdk_pid375497 00:30:09.934 Removing: /var/run/dpdk/spdk_pid375654 00:30:09.934 Removing: /var/run/dpdk/spdk_pid375942 00:30:09.934 Removing: /var/run/dpdk/spdk_pid376189 00:30:09.934 Removing: /var/run/dpdk/spdk_pid376590 00:30:09.934 Removing: /var/run/dpdk/spdk_pid379613 00:30:09.934 Removing: /var/run/dpdk/spdk_pid379791 00:30:09.934 Removing: /var/run/dpdk/spdk_pid380085 00:30:09.934 Removing: /var/run/dpdk/spdk_pid380220 00:30:09.934 Removing: /var/run/dpdk/spdk_pid380542 00:30:10.193 Removing: /var/run/dpdk/spdk_pid380675 00:30:10.193 Removing: /var/run/dpdk/spdk_pid381114 00:30:10.193 Removing: /var/run/dpdk/spdk_pid381254 00:30:10.193 Removing: /var/run/dpdk/spdk_pid381428 00:30:10.193 Removing: /var/run/dpdk/spdk_pid381570 00:30:10.193 Removing: /var/run/dpdk/spdk_pid381734 00:30:10.193 Removing: /var/run/dpdk/spdk_pid381878 00:30:10.193 Removing: /var/run/dpdk/spdk_pid382250 00:30:10.193 Removing: /var/run/dpdk/spdk_pid382472 00:30:10.193 Removing: /var/run/dpdk/spdk_pid382725 00:30:10.193 Removing: /var/run/dpdk/spdk_pid382903 00:30:10.193 Removing: /var/run/dpdk/spdk_pid383043 00:30:10.193 Removing: /var/run/dpdk/spdk_pid383110 00:30:10.193 Removing: /var/run/dpdk/spdk_pid383259 00:30:10.193 Removing: /var/run/dpdk/spdk_pid383540 00:30:10.193 Removing: /var/run/dpdk/spdk_pid383681 00:30:10.193 Removing: /var/run/dpdk/spdk_pid383840 00:30:10.193 Removing: /var/run/dpdk/spdk_pid384103 00:30:10.193 Removing: /var/run/dpdk/spdk_pid384268 
00:30:10.193 Removing: /var/run/dpdk/spdk_pid384408 00:30:10.193 Removing: /var/run/dpdk/spdk_pid384688 00:30:10.193 Removing: /var/run/dpdk/spdk_pid384837 00:30:10.193 Removing: /var/run/dpdk/spdk_pid384996 00:30:10.193 Removing: /var/run/dpdk/spdk_pid385255 00:30:10.193 Removing: /var/run/dpdk/spdk_pid385426 00:30:10.193 Removing: /var/run/dpdk/spdk_pid385566 00:30:10.193 Removing: /var/run/dpdk/spdk_pid385844 00:30:10.193 Removing: /var/run/dpdk/spdk_pid385993 00:30:10.193 Removing: /var/run/dpdk/spdk_pid386153 00:30:10.193 Removing: /var/run/dpdk/spdk_pid386412 00:30:10.193 Removing: /var/run/dpdk/spdk_pid386580 00:30:10.193 Removing: /var/run/dpdk/spdk_pid386721 00:30:10.193 Removing: /var/run/dpdk/spdk_pid386999 00:30:10.193 Removing: /var/run/dpdk/spdk_pid387143 00:30:10.193 Removing: /var/run/dpdk/spdk_pid387308 00:30:10.193 Removing: /var/run/dpdk/spdk_pid387587 00:30:10.193 Removing: /var/run/dpdk/spdk_pid387748 00:30:10.193 Removing: /var/run/dpdk/spdk_pid387896 00:30:10.193 Removing: /var/run/dpdk/spdk_pid388174 00:30:10.193 Removing: /var/run/dpdk/spdk_pid388314 00:30:10.193 Removing: /var/run/dpdk/spdk_pid388482 00:30:10.193 Removing: /var/run/dpdk/spdk_pid388721 00:30:10.193 Removing: /var/run/dpdk/spdk_pid388901 00:30:10.193 Removing: /var/run/dpdk/spdk_pid389046 00:30:10.193 Removing: /var/run/dpdk/spdk_pid389313 00:30:10.193 Removing: /var/run/dpdk/spdk_pid389472 00:30:10.193 Removing: /var/run/dpdk/spdk_pid389640 00:30:10.193 Removing: /var/run/dpdk/spdk_pid389907 00:30:10.193 Removing: /var/run/dpdk/spdk_pid390070 00:30:10.193 Removing: /var/run/dpdk/spdk_pid390214 00:30:10.193 Removing: /var/run/dpdk/spdk_pid390493 00:30:10.193 Removing: /var/run/dpdk/spdk_pid390637 00:30:10.193 Removing: /var/run/dpdk/spdk_pid390802 00:30:10.193 Removing: /var/run/dpdk/spdk_pid390987 00:30:10.193 Removing: /var/run/dpdk/spdk_pid391325 00:30:10.193 Removing: /var/run/dpdk/spdk_pid393815 00:30:10.193 Removing: /var/run/dpdk/spdk_pid451555 00:30:10.193 Removing: /var/run/dpdk/spdk_pid454632 00:30:10.193 Removing: /var/run/dpdk/spdk_pid460919 00:30:10.193 Removing: /var/run/dpdk/spdk_pid464695 00:30:10.193 Removing: /var/run/dpdk/spdk_pid467650 00:30:10.193 Removing: /var/run/dpdk/spdk_pid468170 00:30:10.193 Removing: /var/run/dpdk/spdk_pid474576 00:30:10.193 Removing: /var/run/dpdk/spdk_pid474863 00:30:10.193 Removing: /var/run/dpdk/spdk_pid477837 00:30:10.193 Removing: /var/run/dpdk/spdk_pid482020 00:30:10.193 Removing: /var/run/dpdk/spdk_pid484138 00:30:10.193 Removing: /var/run/dpdk/spdk_pid491464 00:30:10.193 Removing: /var/run/dpdk/spdk_pid497571 00:30:10.193 Removing: /var/run/dpdk/spdk_pid498805 00:30:10.193 Removing: /var/run/dpdk/spdk_pid499482 00:30:10.193 Removing: /var/run/dpdk/spdk_pid510973 00:30:10.193 Removing: /var/run/dpdk/spdk_pid513747 00:30:10.193 Removing: /var/run/dpdk/spdk_pid517504 00:30:10.193 Removing: /var/run/dpdk/spdk_pid518725 00:30:10.193 Removing: /var/run/dpdk/spdk_pid520095 00:30:10.193 Removing: /var/run/dpdk/spdk_pid520244 00:30:10.193 Removing: /var/run/dpdk/spdk_pid520515 00:30:10.193 Removing: /var/run/dpdk/spdk_pid520669 00:30:10.193 Removing: /var/run/dpdk/spdk_pid521261 00:30:10.193 Removing: /var/run/dpdk/spdk_pid522631 00:30:10.193 Removing: /var/run/dpdk/spdk_pid523644 00:30:10.193 Removing: /var/run/dpdk/spdk_pid524103 00:30:10.193 Removing: /var/run/dpdk/spdk_pid528019 00:30:10.193 Removing: /var/run/dpdk/spdk_pid531892 00:30:10.193 Removing: /var/run/dpdk/spdk_pid535534 00:30:10.193 Removing: /var/run/dpdk/spdk_pid560940 00:30:10.193 
Removing: /var/run/dpdk/spdk_pid563656
00:30:10.193 Removing: /var/run/dpdk/spdk_pid567917
00:30:10.193 Removing: /var/run/dpdk/spdk_pid569026
00:30:10.193 Removing: /var/run/dpdk/spdk_pid570273
00:30:10.193 Removing: /var/run/dpdk/spdk_pid573145
00:30:10.193 Removing: /var/run/dpdk/spdk_pid576198
00:30:10.193 Removing: /var/run/dpdk/spdk_pid581673
00:30:10.193 Removing: /var/run/dpdk/spdk_pid581694
00:30:10.193 Removing: /var/run/dpdk/spdk_pid585032
00:30:10.193 Removing: /var/run/dpdk/spdk_pid585171
00:30:10.193 Removing: /var/run/dpdk/spdk_pid585308
00:30:10.193 Removing: /var/run/dpdk/spdk_pid585579
00:30:10.193 Removing: /var/run/dpdk/spdk_pid585590
00:30:10.193 Removing: /var/run/dpdk/spdk_pid586699
00:30:10.193 Removing: /var/run/dpdk/spdk_pid587967
00:30:10.193 Removing: /var/run/dpdk/spdk_pid589252
00:30:10.193 Removing: /var/run/dpdk/spdk_pid590449
00:30:10.193 Removing: /var/run/dpdk/spdk_pid591691
00:30:10.193 Removing: /var/run/dpdk/spdk_pid592910
00:30:10.193 Removing: /var/run/dpdk/spdk_pid596952
00:30:10.193 Removing: /var/run/dpdk/spdk_pid597360
00:30:10.193 Removing: /var/run/dpdk/spdk_pid598459
00:30:10.193 Removing: /var/run/dpdk/spdk_pid599071
00:30:10.193 Removing: /var/run/dpdk/spdk_pid603005
00:30:10.193 Removing: /var/run/dpdk/spdk_pid605054
00:30:10.193 Removing: /var/run/dpdk/spdk_pid609190
00:30:10.193 Removing: /var/run/dpdk/spdk_pid613742
00:30:10.452 Removing: /var/run/dpdk/spdk_pid617703
00:30:10.452 Removing: /var/run/dpdk/spdk_pid618127
00:30:10.452 Removing: /var/run/dpdk/spdk_pid618679
00:30:10.452 Removing: /var/run/dpdk/spdk_pid619103
00:30:10.452 Removing: /var/run/dpdk/spdk_pid619694
00:30:10.452 Removing: /var/run/dpdk/spdk_pid620251
00:30:10.452 Removing: /var/run/dpdk/spdk_pid620801
00:30:10.452 Removing: /var/run/dpdk/spdk_pid621351
00:30:10.452 Removing: /var/run/dpdk/spdk_pid624316
00:30:10.452 Removing: /var/run/dpdk/spdk_pid624461
00:30:10.452 Removing: /var/run/dpdk/spdk_pid628734
00:30:10.452 Removing: /var/run/dpdk/spdk_pid628914
00:30:10.452 Removing: /var/run/dpdk/spdk_pid630560
00:30:10.452 Removing: /var/run/dpdk/spdk_pid636127
00:30:10.452 Removing: /var/run/dpdk/spdk_pid636134
00:30:10.452 Removing: /var/run/dpdk/spdk_pid639597
00:30:10.452 Removing: /var/run/dpdk/spdk_pid641035
00:30:10.452 Removing: /var/run/dpdk/spdk_pid642584
00:30:10.452 Removing: /var/run/dpdk/spdk_pid643973
00:30:10.452 Removing: /var/run/dpdk/spdk_pid645424
00:30:10.452 Removing: /var/run/dpdk/spdk_pid646324
00:30:10.452 Removing: /var/run/dpdk/spdk_pid652339
00:30:10.452 Removing: /var/run/dpdk/spdk_pid652742
00:30:10.452 Removing: /var/run/dpdk/spdk_pid653146
00:30:10.452 Removing: /var/run/dpdk/spdk_pid654732
00:30:10.452 Removing: /var/run/dpdk/spdk_pid655141
00:30:10.452 Removing: /var/run/dpdk/spdk_pid655554
00:30:10.452 Clean
00:30:10.452 killing process with pid 339526
00:30:18.594 killing process with pid 339523
00:30:18.594 killing process with pid 339525
00:30:18.594 killing process with pid 339524
00:30:18.594 07:07:32 -- common/autotest_common.sh@1436 -- # return 0
00:30:18.594 07:07:32 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:30:18.594 07:07:32 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:18.594 07:07:32 -- common/autotest_common.sh@10 -- # set +x
00:30:18.594 07:07:32 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:30:18.594 07:07:32 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:18.594 07:07:32 -- common/autotest_common.sh@10 -- # set +x
00:30:18.594 07:07:32 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:18.594 07:07:32 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:30:18.594 07:07:32 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:30:18.594 07:07:32 -- spdk/autotest.sh@394 -- # hash lcov
00:30:18.594 07:07:32 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:30:18.594 07:07:32 -- spdk/autotest.sh@396 -- # hostname
00:30:18.594 07:07:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:30:18.594 geninfo: WARNING: invalid characters removed from testname!
00:30:45.126 07:07:57 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:47.657 07:08:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:50.189 07:08:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:53.467 07:08:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:55.994 07:08:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:58.523 07:08:12 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:31:01.093 07:08:15 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:31:01.093 07:08:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:01.093 07:08:15 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:31:01.093 07:08:15 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:01.093 07:08:15 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:01.093 07:08:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.093 07:08:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.093 07:08:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.093 07:08:15 -- paths/export.sh@5 -- $ export PATH
00:31:01.093 07:08:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:01.093 07:08:15 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:31:01.093 07:08:15 -- common/autobuild_common.sh@435 -- $ date +%s
00:31:01.093 07:08:15 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1715749695.XXXXXX
00:31:01.093 07:08:15 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1715749695.JddigU
00:31:01.093 07:08:15 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:31:01.093 07:08:15 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:31:01.093 07:08:15 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:31:01.093 07:08:15 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:31:01.093 07:08:15 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:31:01.093 07:08:15 -- common/autobuild_common.sh@451 -- $ get_config_params
00:31:01.093 07:08:15 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:31:01.093 07:08:15 -- common/autotest_common.sh@10 -- $ set +x
00:31:01.093 07:08:15 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:31:01.093 07:08:15 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:31:01.093 07:08:15 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:01.093 07:08:15 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:31:01.093 07:08:15 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:31:01.093 07:08:15 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:31:01.093 07:08:15 -- spdk/autopackage.sh@19 -- $ timing_finish
00:31:01.093 07:08:15 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:31:01.093 07:08:15 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:31:01.093 07:08:15 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:01.093 07:08:15 -- spdk/autopackage.sh@20 -- $ exit 0
00:31:01.093 + [[ -n 296615 ]]
00:31:01.093 + sudo kill 296615
00:31:01.104 [Pipeline] }
00:31:01.123 [Pipeline] // stage
00:31:01.129 [Pipeline] }
00:31:01.147 [Pipeline] // timeout
00:31:01.153 [Pipeline] }
00:31:01.172 [Pipeline] // catchError
00:31:01.178 [Pipeline] }
00:31:01.199 [Pipeline] // wrap
00:31:01.206 [Pipeline] }
00:31:01.222 [Pipeline] // catchError
00:31:01.235 [Pipeline] stage
00:31:01.238 [Pipeline] { (Epilogue)
00:31:01.255 [Pipeline] catchError
00:31:01.258 [Pipeline] {
00:31:01.274 [Pipeline] echo
00:31:01.276 Cleanup processes
00:31:01.283 [Pipeline] sh
00:31:01.570 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:01.570 667696 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:01.587 [Pipeline] sh
00:31:01.875 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:31:01.875 ++ grep -v 'sudo pgrep'
00:31:01.875 ++ awk '{print $1}'
00:31:01.875 + sudo kill -9
00:31:01.875 + true
00:31:01.888 [Pipeline] sh
00:31:02.171 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:12.151 [Pipeline] sh
00:31:12.433 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:12.433 Artifacts sizes are good
00:31:12.447 [Pipeline] archiveArtifacts
00:31:12.454 Archiving artifacts
00:31:12.658 [Pipeline] sh
00:31:12.939 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:31:12.952 [Pipeline] cleanWs
00:31:12.962 [WS-CLEANUP] Deleting project workspace...
00:31:12.962 [WS-CLEANUP] Deferred wipeout is used...
00:31:12.969 [WS-CLEANUP] done
00:31:12.971 [Pipeline] }
00:31:12.990 [Pipeline] // catchError
00:31:13.001 [Pipeline] sh
00:31:13.279 + logger -p user.info -t JENKINS-CI
00:31:13.287 [Pipeline] }
00:31:13.303 [Pipeline] // stage
00:31:13.308 [Pipeline] }
00:31:13.323 [Pipeline] // node
00:31:13.329 [Pipeline] End of Pipeline
00:31:13.356 Finished: SUCCESS